Search results for: practice based research
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 44315

605 Effect of Preoxidation on the Effectiveness of Gd₂O₃ Nanoparticles Applied as a Source of Active Element in the Crofer 22 APU Coated with a Protective-conducting Spinel Layer

Authors: Łukasz Mazur, Kamil Domaradzki, Maciej Bik, Tomasz Brylewski, Aleksander Gil

Abstract:

Interconnects used in solid oxide fuel and electrolyzer cells (SOFCs/SOECs) serve several important functions, and interconnect materials must therefore exhibit certain properties. Their thermal expansion coefficient needs to match that of the ceramic components of these devices – the electrolyte, anode and cathode. Interconnects also provide structural rigidity to the entire device, which is why interconnect materials must exhibit sufficient mechanical strength at high temperatures. Gas-tightness is also a prerequisite, since interconnects separate the gas reagents, and they must also provide very good electrical contact between neighboring cells over the entire operating time. High-chromium ferritic steels meet these requirements to a high degree but are affected by the formation of a Cr₂O₃ scale, which leads to increased electrical resistance. The final criterion for interconnect materials is chemical inertness with respect to the remaining cell components. In the case of ferritic steels, this has proved difficult due to the formation of volatile and reactive oxyhydroxides observed when Cr₂O₃ is exposed to oxygen and water vapor. This process is particularly harmful on the cathode side in SOFCs and the anode side in SOECs. To mitigate this, protective-conducting ceramic coatings can be deposited on an interconnect's surface. The area-specific resistance (ASR) of a single interconnect cannot exceed 0.1 Ω·cm² at any point of the device's operation. The rate at which the Cr₂O₃ scale grows on ferritic steels can be reduced significantly via the so-called reactive element effect (REE). Research has shown that the deposition of Gd₂O₃ nanoparticles on the surface of Crofer 22 APU already modified with a protective-conducting spinel layer further improves the oxidation resistance of this steel. However, the deposition of the manganese-cobalt spinel layer is a rather complex process and is performed at high temperatures in reducing and oxidizing atmospheres.
There was thus reason to believe that this process may reduce the effectiveness of the Gd₂O₃ nanoparticles added as an active element source. The objective of the present study was therefore to determine any such impact by introducing a preoxidation stage after nanoparticle deposition and before the steel is coated with the spinel. This should allow the nanoparticles to become incorporated into the interior of the scale formed on the steel. The samples were oxidized for 7000 h in air at 1073 K under quasi-isothermal conditions. The phase composition, chemical composition, and microstructure of the oxidation products formed on the samples were determined using X-ray diffraction, Raman spectroscopy, and scanning electron microscopy combined with energy-dispersive X-ray spectroscopy. A four-point, two-probe DC method was applied to measure the ASR. It was found that coating deposition does indeed reduce the beneficial effect of the Gd₂O₃ addition, since the smallest mass gain and the lowest ASR value were obtained for the sample subjected to the additional preoxidation stage. It can be assumed that during this stage, gadolinium is incorporated into, and segregates at, grain boundaries of the thin Cr₂O₃ scale that is forming. This allows the Gd₂O₃ nanoparticles to act as a more effective source of the active element.
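The ASR criterion mentioned above follows directly from Ohm's law scaled by the contact area. A minimal sketch of the calculation, with illustrative numbers rather than measurements from this study:

```python
def area_specific_resistance(voltage_v, current_a, contact_area_cm2):
    """Area-specific resistance from a DC voltage/current reading.

    ASR = (V / I) * A, reported in ohm*cm^2. All values here are
    illustrative, not data from the abstract above.
    """
    resistance_ohm = voltage_v / current_a
    return resistance_ohm * contact_area_cm2

# Example: 1 mV drop at 100 mA over a 0.5 cm^2 contact
asr = area_specific_resistance(0.001, 0.1, 0.5)  # 0.005 ohm*cm^2
```

An interconnect passes the criterion only while its ASR stays below 0.1 Ω·cm² over the whole operating time.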

Keywords: interconnects, oxide nanoparticles, reactive element effect, SOEC, SOFC

Procedia PDF Downloads 59
604 Energy Refurbishment of University Building in Cold Italian Climate: Energy Audit and Performance Optimization

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The Directive 2010/31/EC 'Directive of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings' moved the targets of its predecessor toward more ambitious ones, for instance by establishing that, by 31 December 2020, all new buildings should be nearly zero-energy buildings. Moreover, the demonstrative role of public buildings is strongly affirmed, so that for them the nearly zero-energy target was brought forward to January 2019. On the other hand, given the very low turnover rate of buildings (in Europe, between 1% and 3% per year), any policy that does not consider the renovation of the existing building stock cannot be effective in the short and medium term. Accordingly, this study provides a novel, holistic approach to designing the refurbishment of educational buildings in the colder cities of Mediterranean regions, enabling stakeholders to understand the uncertainty of numerical modelling and the real environmental and economic impacts of adopting energy efficiency technologies. The case study is a university building in the Molise region, in central Italy. The proposed approach is based on the application of the cost-optimal methodology, as set out in Delegated Regulation 244/2012 and the Guidelines of the European Commission, for evaluating the cost-optimal level of energy performance with a macroeconomic approach. This means that the refurbishment scenario should correspond to the configuration that leads to the lowest global cost over the estimated economic life cycle, taking into account not only the investment cost but also the operational costs linked to energy consumption and polluting emissions. The definition of the reference building has been supported by various in-situ surveys, investigations and evaluations of indoor comfort.
Data collection can be divided into five categories: 1) geometrical features; 2) building envelope audit; 3) technical system and equipment characterization; 4) building use and thermal zone definition; 5) energy data. For each category, the required measurements have been indicated, with suggestions for identifying the spatial distribution and timing of the measurements. With reference to the case study, the collected data, together with a comparison with energy bills, allowed a proper calibration of a numerical model suitable for hourly energy simulation by means of EnergyPlus. Around 30 energy efficiency measures/packages have been taken into account, concerning both the envelope and the plant systems. Starting from the results, two points will be examined exhaustively: (i) the importance of using validated models to simulate the present performance of the building under investigation; (ii) the environmental benefits and the economic implications of a deep energy refurbishment of an educational building in a cold climate.
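The global-cost comparison at the heart of the cost-optimal methodology reduces to adding discounted annual running costs on top of the investment. A simplified sketch with illustrative figures (a full calculation under Delegated Regulation 244/2012 would also include replacement costs and residual value):

```python
def global_cost(investment, annual_cost, years, discount_rate):
    """Global cost over the calculation period (macroeconomic approach):
    investment plus the sum of discounted annual running costs.
    Simplified: no replacement costs or residual value."""
    discounted = sum(annual_cost / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return investment + discounted

# Two illustrative scenarios over a 30-year period at a 3% discount rate
base = global_cost(0.0, 12000.0, 30, 0.03)        # no intervention
retrofit = global_cost(150000.0, 4000.0, 30, 0.03)  # deep refurbishment
```

The cost-optimal scenario is simply the one with the lowest global cost; here the retrofit pays back within the period despite its investment cost.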

Keywords: energy simulation, modelling calibration, cost-optimal retrofit, university building

Procedia PDF Downloads 155
603 Identification of Clinical Characteristics from Persistent Homology Applied to Tumor Imaging

Authors: Eashwar V. Somasundaram, Raoul R. Wadhwa, Jacob G. Scott

Abstract:

The use of radiomics in measuring geometric properties of tumor images, such as size, surface area, and volume, has been invaluable in assessing cancer diagnosis, treatment, and prognosis. In addition to analyzing geometric properties, radiomics would benefit from measuring topological properties using persistent homology. Intuitively, features uncovered by persistent homology may correlate with tumor structural features. One example is necrotic cavities (corresponding to 2D topological features), which are markers of very aggressive tumors. We develop a data pipeline in R that clusters tumor images based on persistent homology; the clustering is used to identify meaningful clinical distinctions between tumors, and possibly new relationships not captured by established clinical categorizations. A preliminary analysis was performed on 16 Magnetic Resonance Imaging (MRI) breast tissue segments downloaded from the 'Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and Molecular Analysis' (I-SPY TRIAL or ISPY1) collection in The Cancer Imaging Archive. Each segment represents a patient’s breast tumor prior to treatment. The ISPY1 dataset also provided estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) status data. A persistent homology matrix of features up to dimension 2 was calculated for each MRI segmentation. Wasserstein distances were then calculated between all pairs of tumor image persistent homology matrices to create a distance matrix for each feature dimension. Since Wasserstein distances were calculated for 0-, 1-, and 2-dimensional features, three hierarchical clusterings were constructed. The adjusted Rand Index was used to assess how well the clusters corresponded to the ER/PR/HER2 status of the tumors. Triple-negative cancers (negative status for all three receptors) significantly clustered together in the 2-dimensional features dendrogram (adjusted Rand Index of .35, p = .031).
It is known that having a triple-negative breast tumor is associated with aggressive tumor growth and poor prognosis when compared to non-triple-negative breast tumors. The aggressive tumor growth associated with triple-negative tumors may have a unique structure in an MRI segmentation, which persistent homology is able to identify. This preliminary analysis shows promising results for the use of persistent homology on tumor imaging to assess the severity of breast tumors. The next step is to apply this pipeline to other tumor segment images from The Cancer Imaging Archive at different sites such as the lung, kidney, and brain. In addition, it will be assessed whether other clinical parameters, such as overall survival, tumor stage, and tumor genotype, are captured well in persistent homology clusters. If analyzing tumor MRI segments using persistent homology consistently identifies clinical relationships, this could enable clinicians to use persistent homology data as a noninvasive way to inform clinical decision making in oncology.
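The adjusted Rand Index used above to compare the persistent-homology clusters against the receptor-status groups can be computed directly from pair counts. A pure-Python sketch of the standard formula (the pipeline itself is in R; this is only the metric):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index between two labelings of the same items,
    e.g. dendrogram clusters vs. ER/PR/HER2 status groups.
    ARI = (Index - Expected) / (MaxIndex - Expected)."""
    n = len(labels_a)
    pair_counts = Counter(zip(labels_a, labels_b))
    sum_ij = sum(comb(c, 2) for c in pair_counts.values())  # Index
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Identical clusterings (up to relabeling) score 1.0
score = adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```

Unlike the raw Rand index, the adjusted version is corrected for chance agreement, which is why a value of .35 with p = .031 is meaningful on only 16 segments.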

Keywords: cancer biology, oncology, persistent homology, radiomics, topological data analysis, tumor imaging

Procedia PDF Downloads 109
602 Management of Dysphagia after Supraglottic Laryngectomy

Authors: Premalatha B. S., Shenoy A. M.

Abstract:

Background: Rehabilitation of swallowing is as vital as speech in surgically treated head and neck cancer patients to maintain nutritional support, enhance wound healing and improve quality of life. Aspiration following supraglottic laryngectomy is very common, and its rehabilitation is crucial, requiring the involvement of the speech therapist in close contact with the head and neck surgeon. Objectives: To examine swallowing outcomes after intensive therapy in supraglottic laryngectomy. Materials: Thirty-nine supraglottic laryngectomees participated in the study. Of them, 36 were males and 3 females, in the age range of 32-68 years. Eighteen subjects had undergone standard supraglottic laryngectomy (Group 1) for supraglottic lesions, whereas 21 had undergone extended supraglottic laryngectomy (Group 2) for base-of-tongue and lateral pharyngeal wall lesions. Prior to surgery, a visit by the speech pathologist was mandatory to assess suitability for surgery and rehabilitation. Dysphagia rehabilitation started after decannulation of the tracheostoma, focusing on orientation about the anatomy and the physiological variations before and after surgery, tailor-made for each individual based on the type and extent of surgery. A supraglottic diet – soft solids with the supraglottic swallow method – was advocated to prevent aspiration. The success of the intervention was documented as the number of sessions taken to swallow different food consistencies, and as the percentage of subjects who achieved satisfactory swallowing in terms of number of weeks, in both groups. Results: Statistical data were computed in two ways in both groups: 1) the percentage (%) of subjects who swallowed satisfactorily within a time frame ranging from less than 3 weeks to more than 6 weeks; 2) the number of sessions taken to swallow each food consistency without aspiration.
The study indicated that in Group 1 (standard supraglottic laryngectomy), 61% (n=11) of subjects were successfully rehabilitated, but their swallowing normalcy was delayed until, on average, the 29th post-operative day (3-6 weeks). Thirty-three percent (33%, n=6) could swallow satisfactorily without aspiration even before 3 weeks, and only 5% (n=1) needed more than 6 weeks to achieve normal swallowing ability. In Group 2 (extended supraglottic laryngectomy), only 47% (n=10) achieved satisfactory swallowing by 3-6 weeks, and 24% (n=5) achieved normal swallowing ability before 3 weeks. Around 4% (n=1) needed more than 6 weeks, and as many as 24% (n=5) continued to be supplemented with nasogastric feeding even 8-10 months post-operatively, as they exhibited severe aspiration. As far as food consistencies were concerned, Group 1 subjects were able to swallow all types without aspiration much earlier than Group 2 subjects. Group 1 needed only 8 swallowing therapy sessions for thickened soft solids and 15 sessions for liquids, whereas Group 2 required 14 sessions for soft solids and 17 sessions for liquids to achieve swallowing normalcy without aspiration. Conclusion: The study highlights the importance of dysphagia intervention by the speech pathologist in supraglottic laryngectomees.

Keywords: dysphagia management, supraglottic diet, supraglottic laryngectomy, supraglottic swallow

Procedia PDF Downloads 215
601 Evaluating Viability of Using South African Forestry Process Biomass Waste Mixtures as an Alternative Pyrolysis Feedstock in the Production of Bio Oil

Authors: Thembelihle Portia Lubisi, Malusi Ntandoyenkosi Mkhize, Jonas Kalebe Johakimu

Abstract:

Fertilizers play an important role in maintaining the productivity and quality of plants. Inorganic fertilizers (containing nitrogen, phosphorus, and potassium) are largely used in South Africa, as they are considered inexpensive and highly productive. When applied, a portion of the excess fertilizer is retained in the soil, while a portion enters water streams due to surface runoff or the irrigation system adopted. Excess nutrients from fertilizers entering water streams eventually result in harmful algal blooms (HABs) in freshwater systems, which not only disrupt wildlife but can also produce toxins harmful to humans. The use of agro-chemicals such as pesticides and herbicides has been associated with increased antimicrobial resistance (AMR) in humans, as the treated plants are consumed by humans. This bacterial resistance poses a threat, as it prevents the health sector from treating infectious diseases. Archaeological studies have found that pyrolysis liquids were already used in the time of the Neanderthals as a biocide and plant protection product. Pyrolysis is the thermal degradation of plant biomass or organic material under anaerobic conditions, leading to the production of char, bio-oils and syngas. Bio-oil constituents can be categorized into a water-soluble fraction (wood vinegar) and water-insoluble fractions (tar and light oils). Wood vinegar (pyroligneous acid) contains highly oxygenated compounds including acids, alcohols, aldehydes, ketones, phenols, esters, furans, and other multifunctional compounds, with molecular weights and compositions depending on the biomass material from which it is derived and on the pyrolysis operating conditions. Various researchers have found wood vinegar to be efficient in eradicating termites, effective in plant protection and plant growth, and antibacterial, and it was found effective in inhibiting micro-organisms such as Candida yeasts, E. coli, etc.
This study investigated the characterisation of South African forestry product processing waste with the intention of evaluating the potential of using the respective biomass wastes as feedstock for bio-oil production via the pyrolysis process. The ability to use biomass waste materials in the production of wood vinegar has the advantage that it not only allows for a reduction in environmental pollution and landfill requirements, but also does not negatively affect food security. The biomass wastes investigated were from the popular tree types in KwaZulu-Natal, namely pine sawdust (PSD), pine bark (PB), eucalyptus sawdust (ESD) and eucalyptus bark (EB). Furthermore, the research investigates the possibility of mixing the different wastes, with the aim of lessening the cost of raw material separation prior to feeding into the pyrolysis process; mixing also increases the amount of biomass material available for beneficiation. Two mixtures were prepared: a 50/50 mixture of PSD and ESD (EPSD), and a mixture containing pine sawdust, eucalyptus sawdust, pine bark and eucalyptus bark (EPSDB). Characterisation of the biomass waste will involve proximate analysis (volatiles, ash, fixed carbon), ultimate analysis (carbon, hydrogen, nitrogen, oxygen, sulphur), higher heating value, structural analysis (cellulose, hemicellulose and lignin) and thermogravimetric analysis.
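In a standard proximate analysis of the kind planned above, fixed carbon is usually obtained by difference from the measured fractions. A minimal sketch (the composition values are illustrative placeholders, not results from this study):

```python
def fixed_carbon(moisture_pct, volatiles_pct, ash_pct):
    """Fixed carbon by difference in a proximate analysis
    (all values in wt%, same basis). Illustrative only."""
    return 100.0 - (moisture_pct + volatiles_pct + ash_pct)

# Hypothetical softwood-sawdust-like composition
fc = fixed_carbon(moisture_pct=8.0, volatiles_pct=75.0, ash_pct=0.5)
```

Together with the ultimate analysis and heating value, this is the figure that indicates how much char versus bio-oil a feedstock is likely to yield on pyrolysis.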

Keywords: characterisation, biomass waste, saw dust, wood waste

Procedia PDF Downloads 38
600 Optimizing the Effectiveness of Docetaxel with Solid Lipid Nanoparticles: Formulation, Characterization, in Vitro and in Vivo Assessment

Authors: Navid Mosallaei, Mahmoud Reza Jaafari, Mohammad Yahya Hanafi-Bojd, Shiva Golmohammadzadeh, Bizhan Malaekeh-Nikouei

Abstract:

Background: Docetaxel (DTX), a potent anticancer drug derived from the European yew tree, is effective against various human cancers by inhibiting microtubule depolymerization. Solid lipid nanoparticles (SLNs) have gained attention as drug carriers for enhancing drug effectiveness and safety. SLNs, submicron-sized lipid-based particles, can passively target tumors through the "enhanced permeability and retention" (EPR) effect, providing stability, drug protection, and controlled release while being biocompatible. Methods: The SLN formulation included biodegradable lipids (Compritol and Precirol), hydrogenated soy phosphatidylcholine (H-SPC) as a lipophilic co-surfactant, and Poloxamer 188 as a non-ionic polymeric stabilizer. Two SLN preparation techniques, probe sonication and microemulsion, were assessed. Characterization encompassed the SLNs' morphology, particle size, zeta potential, matrix, and encapsulation efficacy. In-vitro cytotoxicity and cellular uptake studies were conducted using mouse colorectal (C-26) and human malignant melanoma (A-375) cell lines, comparing SLN-DTX with Taxotere®. In-vivo studies evaluated tumor inhibitory efficacy and survival in mice bearing colorectal (C-26) tumors, comparing SLN-DTX with Taxotere®. Results: SLN-DTX demonstrated stability, with an average size of 180 nm, a low polydispersity index (PDI) of 0.2, and an encapsulation efficacy of 98.0 ± 0.1%. Differential scanning calorimetry (DSC) suggested amorphous encapsulation of DTX within the SLNs. In vitro studies revealed that SLN-DTX exhibited nearly equivalent cytotoxicity to Taxotere®, depending on concentration and exposure time. Cellular uptake studies demonstrated superior intracellular DTX accumulation with SLN-DTX. In a C-26 mouse model, SLN-DTX at 10 mg/kg outperformed Taxotere® at 10 and 20 mg/kg, with no significant differences in body weight changes and a remarkably high survival rate of 60%.
Conclusion: This study concludes that SLN-DTX, prepared using probe sonication, offers stability and enhanced therapeutic effects. It displayed almost the same in vitro cytotoxicity as Taxotere® but showed superior cellular uptake. In a mouse model, SLN-DTX effectively inhibited tumor growth, with 10 mg/kg outperforming even 20 mg/kg of Taxotere®, without adverse body weight changes and with higher survival rates. This suggests that SLN-DTX has the potential to reduce adverse effects while maintaining or enhancing docetaxel's therapeutic profile, making it a promising drug delivery strategy suitable for industrialization.
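The encapsulation efficacy reported above is conventionally obtained by the indirect method: measuring the free (unencapsulated) drug in the dispersion medium. A minimal sketch with illustrative amounts (not the study's raw data):

```python
def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    """Encapsulation efficiency (%) by the indirect method:
    the fraction of drug not found free in the dispersion medium.
    EE% = (total - free) / total * 100. Illustrative values only."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# Hypothetical example: 10 mg DTX loaded, 0.2 mg found free
ee = encapsulation_efficiency(total_drug_mg=10.0, free_drug_mg=0.2)
```

With these placeholder numbers the formula reproduces an efficiency of 98%, the same order as the value reported for SLN-DTX.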

Keywords: docetaxel, Taxotere®, solid lipid nanoparticles, enhanced permeability and retention effect, drug delivery, cancer chemotherapy, cytotoxicity, cellular uptake, tumor inhibition

Procedia PDF Downloads 62
599 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall

Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono

Abstract:

Particular instability of a building structure under lateral load (e.g. earthquake) will arise due to irregularity in the vertical and horizontal directions, as stated in SNI 03-1726-2012. The conventional slab has been considered to contribute little to increasing the stability of the structure, unless a special slab system such as the flat slab is taken into account. In this paper, the flat slab system of the Sequis Tower, located in South Jakarta, will be assessed for its performance under earthquake. The building has 6 basement floors in which the flat slab system is applied. The flat slab system will be the main focus of this paper, to be compared with the conventional slab system for its performance under earthquake. Regarding the floor plan of the Sequis Tower basement, the re-entrant corner of this building is 43.21%, which exceeds the allowable re-entrant corner of 15% stated in ASCE 7-05. Based on that, horizontal irregularity will be another concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is one in which the slabs are supported by drop panels with shear heads instead of beams. Major advantages of the flat slab are a reduced structural dead load, the removal of beams so that the clear height can be maximized, and the provision of lateral resistance under lateral load. Meanwhile, deflection at the middle strip and punching shear are problems to be considered in detail. Torsion usually appears when a structural member under flexure, such as a beam or column, has an improper dimension ratio. Considering the flat slab as an alternative slab system will keep collapse due to torsion down. A common seismic load-resisting system applied in buildings is the shear wall. Installation of shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake.
The eccentricity of the shear wall location in this building resolves the instability due to horizontal irregularity so that the earthquake load can be absorbed. Performing linear dynamic analyses, such as response spectrum and time history analysis, is suitable when irregularity arises, so that the performance of the structure can be significantly observed. The response spectrum data for South Jakarta, with a PGA of 0.389 g, is the basis for the earthquake load idealization involved in the several load combinations stated in SNI 03-1726-2012. The analysis will result in basic seismic parameters such as the period, displacement, and base shear of the system; in addition, the internal forces of the critical members will be presented. The predicted period of the structure under earthquake load is 0.45 second, but as different slab systems are applied in the analysis, the period will show different values. The flat slab system will probably result in better displacement performance than the conventional slab system, due to its higher stiffness contribution to the whole building system. In line with the displacement, the deflection of the slab will be smaller for the flat slab than for a conventional slab. Hence, shear walls will be more effective in strengthening the conventional slab system than the flat slab system.
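As a static cross-check of the base shear obtained from the dynamic analyses, an equivalent-lateral-force estimate can be sketched in the ASCE 7-style formulation that SNI 03-1726-2012 mirrors. All parameter values below (spectral accelerations, R, weight) are assumed for illustration, not the Sequis Tower's actual design values:

```python
def seismic_base_shear(sds, sd1, t, r, ie, weight):
    """Equivalent-lateral-force base shear V = Cs * W, with the
    seismic response coefficient Cs = SDS / (R/Ie), capped at
    SD1 / (T * (R/Ie)). Simplified: no minimum-Cs checks."""
    cs = sds / (r / ie)
    cs_max = sd1 / (t * (r / ie))
    return min(cs, cs_max) * weight

# Illustrative values; T = 0.45 s matches the predicted period above
v = seismic_base_shear(sds=0.8, sd1=0.4, t=0.45, r=7.0, ie=1.0,
                       weight=50000.0)  # weight in kN -> V in kN
```

Comparing such a hand estimate against the response-spectrum base shear is a common sanity check before examining member forces.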

Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall

Procedia PDF Downloads 169
598 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues in designing high-performance PEC devices. It is therefore very important to understand how oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or microcrystals can affect PEC performance in two possible ways. First, an electron-hole pair produced at the interface of the photoelectrode and electrolyte can recombine at the defect centers under illumination, thereby reducing PEC performance. On the other hand, the defects can lead to higher light absorption in the longer-wavelength region and may act as energy centers for the water splitting reaction, which can improve PEC performance. Even though the dislocation growth of ZnO has been verified by full density functional theory (DFT) calculations and local density approximation (LDA) calculations, further studies are required to correlate the structures of ZnO with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of mutual interactions. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not yet been clearly explored, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures that exhibit significant PEC performance.
Herein, we fabricated various ZnO nanostructures, such as hollow spheres, bucky bowls, nanorods and triangles, investigated their pH-dependent growth mechanism, and correlated the PEC performances with them. In particular, the origin of the well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which strongly suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO in which surface oxygen is extracted from the ZnO by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of the specific structural shapes and oxygen vacancies via dislocations, and their impact on PEC performance, are explored. In the water splitting measurements, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² vs. RHE (under ~360 nm UV light), with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst.
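The IPCE figure quoted above is conventionally computed from the photocurrent density, wavelength and incident light power. A sketch of the standard formula; the incident power used below is an assumed value, since the abstract does not report it:

```python
def ipce_percent(current_density_ma_cm2, wavelength_nm, power_mw_cm2):
    """Incident photon-to-current conversion efficiency:
    IPCE (%) = 1240 * J / (lambda * P) * 100,
    with J in mA/cm^2, lambda in nm and P in mW/cm^2."""
    return (1240.0 * current_density_ma_cm2
            / (wavelength_nm * power_mw_cm2) * 100.0)

# Reported J and wavelength; P = 50 mW/cm^2 is an assumption
ipce = ipce_percent(1.517, 360.0, 50.0)
```

With that assumed power the formula lands near the reported ~10.4%, illustrating how J at 360 nm translates into the quoted efficiency.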

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 268
597 Development of a Mixed-Reality Hands-Free Teleoperated Robotic Arm for Construction Applications

Authors: Damith Tennakoon, Mojgan Jadidi, Seyedreza Razavialavi

Abstract:

With recent advancements of automation in robotics, from self-driving cars to autonomous four-legged quadrupeds, one industry that has remained stagnant is the construction industry. The methodologies used on a modern-day construction site consist of arduous physical labor and the use of heavy machinery, and they have not changed over the past few decades. The dangers of a modern-day construction site affect the health and safety of the workers, who perform tasks such as lifting and moving heavy objects and must maintain unhealthy postures to complete repetitive tasks such as painting, installing drywall, and laying bricks. Further, training for heavy machinery is costly and time-consuming due to its complex control inputs. The main focus of this research is using immersive wearable technology and robotic arms to perform the complex and intricate skills of modern-day construction workers while alleviating the physical labor required for their day-to-day tasks. The methodology consists of mounting a stereo vision camera, the ZED Mini by Stereolabs, onto the end effector of an industrial-grade robotic arm and streaming the video feed into the Meta Quest 2 (Quest 2) virtual reality (VR) head-mounted display (HMD). Due to the nature of stereo vision, and the similar fields of view of the stereo camera and the Quest 2, human vision can be replicated on the HMD. The main advantage this type of camera provides over a traditional monocular camera is that it gives the user wearing the HMD a sense of the depth of the camera scene – specifically, a first-person view of the robotic arm’s end effector. Utilizing the built-in cameras of the Quest 2 HMD, open-source hand-tracking libraries from OpenXR can be implemented to track the user’s hands in real time. A mixed-reality (XR) Unity application can be developed to map the operator's physical hand motions to the end effector of the robotic arm.
Implementing gesture controls will enable the user to move the robotic arm and control its end effector by moving their own arm and providing gesture inputs from a distant location. Given that the end effector of the robotic arm is a gripper tool, closing and opening the operator’s hand will translate to the gripper grabbing or releasing an object. This human-robot interaction approach provides many benefits within the construction industry. First, the operator’s safety will be increased substantially, as they can be away from the site location while still being able to perform complex tasks such as moving heavy objects from place to place, or repetitive tasks such as painting walls and laying bricks. The immersive interface enables precise robotic arm control and requires minimal training and knowledge of robotic arm manipulation, which lowers the cost of operator training. This human-robot interface can be extended to many applications, such as nuclear accident/waste cleanup, underwater repairs, deep space missions, and manufacturing and fabrication within factories. Further, the robotic arm can be mounted onto existing mobile robots to provide access to hazardous environments, including power plants, burning buildings, and high-altitude repair sites.
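The hand-to-gripper mapping described above can be reduced to a distance test between tracked fingertip positions. A minimal sketch (the 3 cm pinch threshold and the binary command names are assumptions for illustration; real OpenXR hand tracking would supply the fingertip joints):

```python
def gripper_command(thumb_tip, index_tip, close_threshold_m=0.03):
    """Map a tracked pinch gesture to a binary gripper command.

    thumb_tip / index_tip: (x, y, z) fingertip positions in metres,
    e.g. from an OpenXR hand-tracking pose. If the fingertips are
    closer than the threshold, close the gripper; otherwise open it.
    """
    dist = sum((a - b) ** 2 for a, b in zip(thumb_tip, index_tip)) ** 0.5
    return "CLOSE" if dist < close_threshold_m else "OPEN"

# Fingertips 1 cm apart -> pinch detected
cmd = gripper_command((0.0, 0.0, 0.0), (0.01, 0.0, 0.0))
```

In practice the threshold would be hysteretic (separate open/close distances) to avoid the gripper chattering near the boundary.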

Keywords: construction automation, human-robot interaction, hand-tracking, mixed reality

Procedia PDF Downloads 56
596 Genetic Screening of Sahiwal Bulls for Higher Fertility

Authors: Atul C. Mahajan, A. K. Chakravarty, V. Jamuna, C. S. Patil, Neeraj Kashyap, Bharti Deshmukh, Vijay Kumar

Abstract:

The selection of Sahiwal bulls on the basis of their dams' best lactation milk yield under the breeding programme in herds of the country, neglecting fertility traits, leads to deterioration in their performance and economy. The goal of this study was to explore the polymorphism of the CRISP2 gene and its association with semen traits (post-thaw motility, hypo-osmotic swelling test, acrosome integrity, DNA fragmentation and capacitation status), scrotal circumference, and the expected predicted difference (EPD) for milk yield and fertility. The Sahiwal bulls included in the present study were 60 bulls used in the breeding programme as well as 50 young bulls yet to be included in it. All the Sahiwal bulls were found to be polymorphic for the CRISP2 gene (AA, AG and GG genotypes) within exon 7 at position 589 of the CRISP2 mRNA, as determined by PCR-SSCP and sequencing. Semen analyses were performed on the frozen semen doses of the 60 breeding bulls, pertaining to the four seasons (winter, summer, rainy and autumn). Scrotal circumference was measured on the existing Sahiwal breeding bulls in the herd (n=47). The effects of non-genetic factors on reproduction traits were studied by the least-squares technique, and the significance of the differences between the means of the subclasses of season, period, parity and age group was tested. The data were adjusted for the significant non-genetic factors to remove differential environmental effects. The adjusted data were used to generate traits such as waiting period (WP), pregnancy rate (PR) and the EPD of fertility. Genetic and phenotypic parameters of the reproduction traits were estimated. The overall least-squares means of age at first calving (AFC), service period (SP) and WP were estimated as 36.69 ± 0.18 months, 120.47 ± 8.98 days and 79.78 ± 3.09 days, respectively. Season and period of birth had a significant effect (p < 0.01) on AFC, which was highest for the autumn season of birth, followed by summer, winter and rainy.
Season and period of calving had significant effect (p < 0.01) on SP and WP of sahiwal cows. The WP for Sahiwal cows was standardized based on four developed predicted model for pregnancy rate 42, 63, 84 and 105 days using all lactation records. The WP for Sahiwal cows were standardized as 42 days. A selection criterion was developed for Sahiwal breeding bulls and young Sahiwal bulls on the basis of EPD of fertility. The genotype has significant effect on expected predicted difference of fertility and some semen parameters like post thaw motility and HOST. AA Genotype of CRISP2 gene revealed better EPD for fertility than EPD of milk yield. AA genotype of CRISP2 gene has higher scrotal circumference than other genotype. For young Sahiwal bulls only AA genotypes were present with similar patterns. So on the basis of association of genotype with seminal traits, EPD of milk yield and EPD for fertility status, AA and AG genotype of CRISP2 gene was better for higher fertility in Sahiwal bulls.
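The least-squares adjustment for non-genetic factors described above can be sketched as follows. This is a minimal illustration with invented toy numbers and only a single season-of-birth factor, not the authors' actual model, which also included period, parity, and age group:

```python
import numpy as np

# Toy records: age at first calving (months) with a season-of-birth code (0-3).
# All values are invented for illustration.
afc = np.array([35.1, 38.2, 36.5, 39.0, 34.8, 37.7, 36.0, 38.5])
season = np.array([0, 1, 2, 3, 0, 1, 2, 3])

# Design matrix: intercept plus season dummy variables (season 0 as the
# reference class), solved by ordinary least squares.
X = np.column_stack(
    [np.ones(len(afc))] + [(season == s).astype(float) for s in (1, 2, 3)]
)
beta, *_ = np.linalg.lstsq(X, afc, rcond=None)

# Adjusted records: subtract the estimated season effects so that every
# record is expressed on the scale of the reference season.
season_effect = X[:, 1:] @ beta[1:]
afc_adjusted = afc - season_effect
print(afc_adjusted.round(2))
```

After adjustment, the season subclass means coincide, so remaining variation can be attributed to the animals rather than to the environment.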

Keywords: expected predicted difference, fertility, Sahiwal, waiting period

Procedia PDF Downloads 568
595 Neoliberal Settler City: Socio-Spatial Segregation, Livelihood of Artists/Craftsmen in Delhi

Authors: Sophy Joseph

Abstract:

The study uses the concept of the 'settler city' to understand the nature of the peripheralization that a neoliberal city initiates. The settler city designs powerless communities without inherent rights, title, and sovereignty. Kathputli Colony, home to generations of artists and craftsmen who have kept the heritage of arts and crafts alive, has seen its population evicted from urban space. The proposed study, 'Neoliberal Settler City: Socio-spatial Segregation and Livelihood of Artists/Craftsmen in Delhi', problematizes the settler city as a colonial technology. The colonial regime 'erased' the 'unwanted' as primitive and swept them to the peripheries of the city. This study also highlights how structural change in the political economy has undermined their crafts and arts by depriving them of the chance to practice and perform them with dignity in urban space. The interconnections between citizenship and the In-Situ Public-Private Partnership in the Kathputli rehabilitation have already become a subject of academic exercise. However, a comprehensive study connecting the inherent characteristics of the neoliberal settler city, the trajectory of the political economy of unorganized workers (artists/craftsmen), and the legal containment and exclusion leading to the dispossession and marginalization of communities from the city site is needed to contextualize the trauma of spatial segregation. This study deals with the dominant political, cultural, social, and economic behavior of the structure in state formation, the accumulation of property, and the design of urban space, fueled by the segregation of marginalized and unorganized communities and the disowning of the 'footloose proletariat', the migrant workforce. The methodology involves qualitative research among the communities, and the fieldwork (oral testimonies and personal accounts) becomes the primary material for theorizing these realities. Secondary materials, in the form of archival records on the historical evolution of Delhi as a planned city, are also used.
As the study also adopts a 'narrative approach' within the qualitative design, the life experiences of craftsmen and artists as performers, and the emotional trauma of losing their livelihood and space, form an important record for understanding the instability and insecurity that marginalization and development impose on the urban poor. The study attempts to show that, although there was a change in political tradition from colonialism to constitutional democracy, the new state still follows a policy of segregation and dispossession of these communities. It is this dispossession from space, deprivation of livelihood, and non-consultative rehabilitation process that reflect the neoliberal approach of the state, and these are also the critical findings of the study. The study applies a critical spatial lens, analyzing ethnographic and sociological data, representational practices, and development debates to understand the 'urban otherization' directed against craftsmen and artists. It seeks to develop a conceptual framework for understanding the resistance of communities against the primitivity attached to them and for decolonizing the city. This helps to contextualize the demand for declaring Kathputli Colony a 'heritage artists' village'. The conceptualization and contextualization support arguments for the communities' right to the city, collective rights to property and services, and self-determination. The aspirations of the communities also help to draw a normative orientation towards decolonization. It is important to study this site within the framework of 'inclusive cities', because cities are rarely noted as important sites of 'community struggles'.

Keywords: neoliberal settler city, socio-spatial segregation, the livelihood of artists/craftsmen, dispossession of indigenous communities, urban planning and cultural uprooting

Procedia PDF Downloads 110
594 The Spanish Didactic Book 'El Calculo Y La Medida en El Primer Grado De La Escuela Decroly' (1934): A Look at the Mathematical Knowledge

Authors: Juliana Chiarini Balbino Fernandes

Abstract:

This article investigates the Spanish didactic book entitled 'El Calculo y La Medida en El Primer Grado de La Escuela Decroly', written by Dr. O. Decroly and A. Hamaide and published in Madrid in 1934, and analyzes how mathematical knowledge is present in the proposed Centers of Interest. Textbooks, besides being pedagogical tools, reflect a particular moment in society and allow analysis of the theoretical-methodological proposal that could be implemented by the teacher. The study proposed here is carried out through the lens of Cultural History, supported by Roger Chartier (1991), and draws on Alain Choppin's (2004) concepts concerning textbooks. The book selected for this study sets out a program of ideas associated with the method of Centers of Interest, and arithmetic is linked to these interests. In the first courses (ages six to eight), most centers can be considered to correspond to occasional lessons, as they take advantage of events that arise spontaneously to work with observation, measurement, association, and expression exercises. The program of ideas associated with the Centers of Interest addresses the biological and social aspects of children, allowing them to express their need for activities and games and satisfying their natural curiosity. The program also offers occasions for problems whose data are taken from observation exercises and concrete expressions (manual work, drawings). In the method applied at the school of L'Ermitage, the school created by Decroly in Belgium in 1907, observation is the basis of each Center of Interest; it offers the chance to compare and to measure.
To observe is more than to perceive; it is also to establish relations between the graded aspects of the same object and to seek relations between different intensities; it is to verify successions and spatial and temporal relationships; it is to make comparisons, noticing differences and similarities as a whole or in detail (analysis); it is to build a bridge between the world and thought. To make observation more precise, it is important to compare, to measure, and to resort to objects regarded as natural units of measure. Measurement and calculation are therefore quite naturally subordinate to observation. Thus, it is possible to lead the child into an interest in calculation by linking it to observation. It was observed that the Centers of Interest, according to Decroly, should respond to the concerns and motivations of the students, and that the teaching of arithmetic should follow a logical seriation, considering the interests and experience of the children. The teaching of arithmetic should not be confined to the timetable; it should cover every quantitative aspect that arises in the other disciplines. The feeling of unity is established through observation, association, and expression, which coordinate a whole program of cultural activities, concentrating it around a central idea.

Keywords: didactic book, centers of interest, mathematical knowledge, primary education

Procedia PDF Downloads 90
593 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method

Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov

Abstract:

The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is therefore of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is a particularly effective technique, with high potential for the study of biological solutions and their properties. It allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method that registers the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection, with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the total internal reflection regime allows biological fluids to be studied at the level of single molecules, which further increases the sensitivity and informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble; this is important in the study of biomolecular fluids.
To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time. Latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light-scattering signal was registered by a PIN photodiode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform were calculated, both in the regime of Brownian motion and under the action of the field, to obtain the parameters of the object investigated. The main result of the study was the dependence of the autocorrelation function on the concentration of the microspheres and the magnitude of the applied field. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
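The spectral and correlation analysis step can be illustrated with a short sketch: a scattered-light intensity trace carrying a Doppler shift proportional to the electrophoretic drift velocity is simulated, its power spectrum is computed by FFT, and the autocorrelation function is recovered via the Wiener-Khinchin theorem. All numbers (sampling rate, Doppler frequency, noise level) are illustrative assumptions, not values from the experiment:

```python
import numpy as np

fs = 10_000.0                     # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f_doppler = 120.0                 # Doppler frequency from particle drift, Hz (assumed)
rng = np.random.default_rng(0)
signal = np.cos(2 * np.pi * f_doppler * t) + 0.3 * rng.standard_normal(t.size)

# Power spectrum via FFT; the peak frequency estimates the drift velocity.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"estimated Doppler peak: {peak:.1f} Hz")

# Autocorrelation via the Wiener-Khinchin theorem (inverse FFT of the
# power spectrum), normalized to 1 at zero lag.
acf = np.fft.irfft(spectrum)
acf /= acf[0]
```

In the real measurement the peak position shifts with the applied field, while the decay of the autocorrelation function reflects Brownian motion and, as reported above, concentration and heating effects.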

Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection

Procedia PDF Downloads 190
592 The Combined Use of L-Arginine and Progesterone During the Post-breeding Period in Female Rabbits Increases the Weight of Their Fetuses

Authors: Diego F. Carrillo-González, Milena Osorio, Natalia M. Cerro, Yasser Y. Lenis

Abstract:

Introduction: mortality during the implantation and early embryonic development periods reaches around 30% in different mammalian species. It has been described that progesterone (P4) and arginine (Arg) play a beneficial role in establishing and maintaining early pregnancy in mammals. To the best of our knowledge, the combined effect of Arg and P4 on reproductive parameters in rabbits has not yet been elucidated. Objective: to assess the effect of L-arginine and progesterone administered during the post-breeding period in female rabbits on the composition of the amniotic fluid, the placental structure, and bone growth in their fetuses. Methods: crossbred female rabbits (n=16) were randomly distributed into four experimental groups (Ctrl, Arg, P4, and Arg+P4). The control group received 0.9% saline solution as a placebo; the Arg group received arginine (50 mg/kg BW) from day 4.5 to day 19 post-breeding; the P4 group received progesterone (Gestavec®, 1.5 mg/kg BW) from 24 hours to day 4 post-breeding; and the Arg+P4 group received both treatments under the same time and dose schedules as the Arg and P4 groups. Four females were sacrificed, and the amniotic fluid was collected and analyzed with rapid urine test strips, while the placentas and fetuses were processed in the laboratory to obtain histological plates. The percentages of the decidual, labyrinthine, and junctional zones were determined, and the femur length of each fetus was measured as an indicator of growth. Descriptive statistics were applied to identify the success rates for each of the tests. Afterwards, a one-way analysis of variance (ANOVA) was performed, and means were compared by Tukey's test. Results: a higher density (p<0.05) was observed in the amniotic fluid of fetuses in the control group (1022±2.5 g/mL) compared to the P4 (1015±5.3 g/mL) and Arg+P4 (1016±4.9 g/mL) groups.
Additionally, the density of the amniotic fluid in the Arg group (1021±2.5 g/mL) was higher (p<0.05) than in the P4 group. The concentrations of protein, glucose, and ascorbic acid showed no statistical differences between treatments (p>0.05). In the histological analysis of the uteroplacental regions, a statistical difference (p<0.05) in the proportion of the decidual zone was found between the P4 group (9.6±2.6%) and the Ctrl (28.15±12.3%) and Arg+P4 (26.3±4.9%) groups. In the analysis of the fetuses, weight was higher in the Arg group (2.69±0.18) than in the other groups (p<0.05), while a shorter length was observed (p<0.05) in the fetuses of the Arg+P4 group (25.97±1.17). However, no difference (p>0.05) was found when comparing the lengths of the developing femurs between the experimental groups. Conclusion: the combination of L-arginine and progesterone reduces the density of the amniotic fluid without affecting its protein, energy, and antioxidant components. The use of L-arginine alone stimulates weight gain in fetuses without affecting size, which could be used to improve production parameters in rabbit production systems. In addition, the modification of the decidual zone could indicate a placental adaptation to the fetal growth process; however, more specific studies on the placentation process are required.
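The one-way ANOVA used here can be sketched from first principles with a between/within sum-of-squares decomposition. The group values below are invented toy numbers shaped like the reported amniotic-fluid densities, not the study's data, and the sketch omits the Tukey post-hoc comparison:

```python
import numpy as np

# Toy amniotic-fluid densities (g/mL) for four hypothetical groups.
groups = {
    "Ctrl":  np.array([1021.0, 1023.5, 1022.0, 1021.5]),
    "Arg":   np.array([1020.0, 1022.5, 1021.0, 1020.5]),
    "P4":    np.array([1014.0, 1016.5, 1015.0, 1014.5]),
    "ArgP4": np.array([1015.0, 1017.5, 1016.0, 1016.5]),
}
data = np.concatenate(list(groups.values()))
grand_mean = data.mean()

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())

df_between = len(groups) - 1
df_within = len(data) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between},{df_within}) = {F:.2f}")
```

A large F relative to the critical value of the F(3, 12) distribution indicates that at least one group mean differs, after which pairwise means are compared with Tukey's test, as in the study.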

Keywords: arginine, progesterone, rabbits, reproduction

Procedia PDF Downloads 64
591 India’s Energy Transition, Pathways for Green Economy

Authors: B. Sudhakara Reddy

Abstract:

In the modern economy, energy is fundamental to virtually every product and service in use, and the economy has developed in dependence on abundant, easy-to-transform, but polluting fossil fuels. On the one hand, increases in population and income levels, combined with rising per capita energy consumption, require energy production to keep pace with economic growth; on the other hand, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. It is therefore important to decouple economic growth from environmental degradation, and the search for green energy involving affordable, low-carbon, and renewable sources has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which require not only the development and use of new technologies but also changes in user behaviour, policy, and regulation. Two scenarios are developed: a baseline business-as-usual (BAU) scenario and a green energy (GE) scenario. The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in the future. India's population is projected to grow by 23% during 2010-2030, reaching 1.47 billion. Real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion, or $3,586 per capita (base year 2010). Owing to the increases in population and GDP, primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036.
Carbon emissions are projected to increase 2.5-fold from 2010, reaching 3,440 million tonnes, with per capita emissions of 2.2 tons per annum; the carbon intensity (tons per US$ of GDP), however, decreases from 0.96 to 0.67. Under the GE scenario, energy use reaches 1,079 MTOE by 2030, a saving of about 23% over BAU, as the penetration of renewable energy resources reduces total primary energy demand. The reduction in fossil fuel demand and the focus on clean energy reduce the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (tons/US$ of GDP) under the GE scenario. The study develops new 'pathways out of poverty' by creating more than 10 million jobs, thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies; the challenges to this path lie in the socio-economic-political domain. However, to attain a green economy, an appropriate policy package must be in place, which will be critical in determining the kinds of investments needed and the incidence of costs and benefits. These results provide a basis for policy discussions on the investments, policies, and incentives to be put in place by national and local governments.
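Several of the headline indicators can be reproduced with back-of-the-envelope arithmetic from the figures quoted in the abstract. This is a sketch only; the per-thousand-US$ normalization of carbon intensity is an assumption made here, chosen because it reproduces the quoted 0.67:

```python
# Scenario figures quoted above.
gdp_2030 = 5.1e12          # US$
population_2030 = 1.47e9
energy_bau = 1397e6        # TOE, BAU scenario
energy_ge = 1079e6         # TOE, GE scenario
co2_bau = 3440e6           # tonnes CO2, BAU scenario

per_capita_emissions = co2_bau / population_2030        # t per person per year
carbon_intensity = co2_bau / (gdp_2030 / 1000)          # t per thousand US$ (assumed unit)
energy_saving = 1 - energy_ge / energy_bau              # GE saving relative to BAU

print(f"per-capita emissions ~ {per_capita_emissions:.1f} t")
print(f"carbon intensity ~ {carbon_intensity:.2f} t per thousand US$")
print(f"GE saving vs BAU ~ {energy_saving:.0%}")
```

The computed carbon intensity (~0.67) and GE energy saving (~23%) match the abstract's figures, while the per-capita emissions come out near the quoted 2.2 t per annum.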

Keywords: energy, renewables, green technology, scenario

Procedia PDF Downloads 225
590 Data Quality on Regular Childhood Immunization Programme at Degehabur District: Somali Region, Ethiopia

Authors: Eyob Seife

Abstract:

Immunization is a life-saving intervention that prevents needless suffering through sickness, disability, and death. The emphasis on data quality and use will become even stronger with the development of the Immunization Agenda 2030 (IA2030). Quality data are a key factor in generating reliable health information, enabling progress monitoring, financial planning, vaccine forecasting, and decision-making for the continuous improvement of the national immunization program. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remain critical and challenging, especially in hard-to-reach and pastoralist areas. The Degehabur district was selected based on the hypothesis that there is no difference in consistency between reported and recounted immunization data. Data quality depends on several factors, among them organizational, behavioral, technical, and contextual ones. A cross-sectional quantitative study was conducted in September 2022 in the Degehabur district, using the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers, and reporting documents were reviewed at 5 health facilities (2 health centers and 3 health posts) of primary health care units over one fiscal year (12 months) to determine the accuracy ratio. The data were collected by trained DQS assessors to explore the quality of the monitoring systems at the health posts, health centers, and the district health office. A quality index (QI) was assessed, and accuracy ratios were computed for the first and third doses of pentavalent vaccine, fully immunized (FI) children, and the first dose of measles-containing vaccine (MCV).
In this study, facility-level results showed both over-reporting and under-reporting at the health posts when computing the accuracy ratio of the tally sheets to the health post reports held at the health centers, for almost all antigens verified: for pentavalent 1, the ratio was 88.3%, 60.4%, and 125.6% for health posts A, B, and C, respectively. For the first dose of measles-containing vaccine (MCV), the accuracy ratio was similarly 126.6%, 42.6%, and 140.9% for health posts A, B, and C, respectively. The accuracy ratio for fully immunized children was 0% for health posts A and B and 100% for health post C. Relatively better accuracy ratios were seen at the health centers, where the ratio for the first pentavalent dose was 97.4% and 103.3% for health centers A and B, while that for the first dose of measles-containing vaccine (MCV) was 89.2% and 100.9% for health centers A and B, respectively. The quality index (QI) of all facilities ranged between a maximum of 33.33% and a minimum of 0%. Most of the verified immunization data accuracy ratios were relatively better at the health center level; however, the quality of the monitoring system was poor at all levels, in addition to poor data accuracy at all health posts. Attention should therefore be given to improving staff capacity and the quality of the monitoring system components, namely recording, reporting, archiving, data analysis, and the use of information for decision-making, at all levels, and to improving data accuracy at the root and health post levels, especially in pastoralist areas.
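The accuracy (verification) ratio used above can be sketched as recounted doses from the source documents divided by reported doses, expressed as a percentage; ratios above 100% suggest under-reporting and ratios below 100% over-reporting. The dose counts below are invented for illustration and are not the study's raw data:

```python
def accuracy_ratio(recounted: int, reported: int) -> float:
    """Accuracy ratio in percent: recounted doses / reported doses * 100."""
    return 100.0 * recounted / reported

# (recounted, reported) pentavalent-1 doses; hypothetical figures.
facilities = {
    "health post A": (530, 600),
    "health post B": (760, 605),
    "health post C": (610, 610),
}
for name, (recount, report) in facilities.items():
    ratio = accuracy_ratio(recount, report)
    label = ("over-reporting" if ratio < 100
             else "under-reporting" if ratio > 100
             else "consistent")
    print(f"{name}: {ratio:.1f}% ({label})")
```

Aggregating such ratios per antigen and facility yields figures directly comparable to the percentages reported in the study.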

Keywords: accuracy ratio, Degehabur District, regular childhood immunization program, quality of monitoring system, Somali Region-Ethiopia

Procedia PDF Downloads 78
589 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors

Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria

Abstract:

The constant demand for electrical energy, as well as growing environmental concern, leads to the necessity of investing in clean and eco-friendly energy sources, which implies the development of enhanced energy storage devices. Li-ion batteries (LIBs) and electrical double-layer capacitors (EDLCs) are the most widespread energy storage systems. Batteries can store high energy densities, in contrast to capacitors, whose main strengths are high power density and long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offer all these advantages in a single device. This is achieved by combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Because of the abundance and affordability of carbon, dual-carbon LICs are nowadays the common technology: normally, an activated carbon (AC) is used as the EDLC-like electrode, while graphite is the material commonly employed as the anode. LICs are promising systems for applications in which both high energy and high power densities are required, such as kinetic energy recovery systems. Although these devices are already on the market, some drawbacks, like the limited power delivered by graphite and the energy-limiting nature of AC, must be solved to trigger their widespread use. Focusing on the anode, one possibility is to replace graphite with hard carbon (HC). The better rate capability of the latter increases the power performance of the device, and the disordered carbonaceous structure of HCs enables storing twice the theoretical capacity of graphite. With respect to the cathode, ACs are characterized by a high volume of micropores, in which the charge is stored. Nevertheless, they normally lack mesopores, which are important mainly at high C-rates, as they act as transport channels for the ions to reach the micropores.
Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to obtain the final carbon. Moreover, ACs are not characterized by high electrical conductivity, which is an important property for good performance in energy storage applications. A possible candidate to substitute for ACs is carbon aerogels (CAs). CAs are materials that combine high porosity with great electrical conductivity, two characteristics that are usually mutually exclusive in carbon materials. Furthermore, their porous properties can be tailored quite accurately in accordance with the requirements of the application. In the present study, CAs with controlled porosity were obtained from the polymerization of resorcinol and formaldehyde by microwave heating. By varying the synthesis conditions, mainly the amount of precursors and the pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied by means of a half-cell configuration. The material with the best performance was evaluated as the cathode in a LIC versus a hard carbon anode. An analogous full LIC made with a highly microporous commercial cathode was also assembled for comparison purposes.

Keywords: li-ion capacitors, energy storage, tailored porosity, carbon aerogels

Procedia PDF Downloads 141
588 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more and more complex and difficult due to its rapid development towards high voltage, long distances, and large capacity. For instance, many large-scale wind farms have been connected to the power grid, and their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with this problem. Compared with traditional equipment such as generators, new controllable devices, represented by FACTS (Flexible AC Transmission System) equipment, have more accurate control ability and respond faster, but they are too expensive for wide use. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional control equipment and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-time-space frame is proposed in this paper to bring both kinds of advantages into play, improving both control capability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the power grid. With generators, FSC (Fixed Series Compensation), and TCSC, the coordination method between a two-layer regional power grid and its subgrid is studied in detail: the coordination control model is built, the corresponding scheme is proposed, and the conclusions are verified by simulation. The analysis shows that the interface power flow can be controlled by the generators, while the power flow of specific lines between the two-layer regions can be adjusted by the FSC and TCSC.
The smaller the interface power flow adjustment carried out by the generators, the bigger the control margin of the TCSC, but the higher the total consumption of the generators. Secondly, the coordination of different time scales is studied to balance the total consumption of the generators against the control margin of the TCSC, so that the minimum control cost can be obtained. The coordination method between two-layer ultra-short-term correction and AGC (Automatic Generation Control) is studied with generators, FSC, and TCSC: the optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusions are verified by simulation. Finally, the aforementioned method within the multi-time-space scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. Its correctness and effectiveness are verified by the simulation results. This coordinated optimizing control method can contribute to decreasing control costs and will provide a reference for subsequent studies in this field.
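The genetic-algorithm step can be illustrated with a toy sketch: choose a generator rescheduling amount g and a TCSC compensation setting x that meet a target line-flow change while minimizing cost. The linearized flow response, the cost coefficients, and the bounds are all invented for illustration; the paper's actual model is far more detailed:

```python
import random

random.seed(1)
TARGET = 40.0          # required flow change (MW), assumed

def cost(g, x):
    """Generation cost plus TCSC cost plus a penalty for missing the target flow."""
    flow = 0.8 * g + 60.0 * x          # crude linearized flow response (assumed)
    return 5.0 * g + 1.0 * (100 * x) + 1000.0 * (flow - TARGET) ** 2

def evolve(pop_size=40, gens=200):
    # Random initial population of (g, x) pairs within assumed bounds.
    pop = [(random.uniform(0, 100), random.uniform(0, 0.7)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: cost(*ind))
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            (g1, x1), (g2, x2) = random.sample(parents, 2)
            g = 0.5 * (g1 + g2) + random.gauss(0, 1.0)     # crossover + mutation
            x = 0.5 * (x1 + x2) + random.gauss(0, 0.01)
            children.append((min(max(g, 0), 100), min(max(x, 0), 0.7)))
        pop = parents + children
    return min(pop, key=lambda ind: cost(*ind))

g_best, x_best = evolve()
print(f"generator ~ {g_best:.1f} MW, TCSC ~ {x_best:.2f} p.u.")
```

Because cheap TCSC action is preferred over costly generator rescheduling in this toy cost model, the search drifts towards a small generator adjustment with the TCSC carrying most of the flow change, mirroring the trade-off described above.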

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 247
587 Monitoring of Disease Vector Mosquitoes in Areas of Influence of Energy Projects in the Amazon (Amapá State), Brazil

Authors: Ribeiro Tiago Magalhães

Abstract:

Objective: The objective of this study was to evaluate the influence of a hydroelectric power plant in the state of Amapá and to present the results obtained by assessing the diversity of the main mosquito vectors involved in the transmission of pathogens that cause diseases such as malaria, dengue, and leishmaniasis. Methodology: The study was conducted on the banks of the Araguari River, in the municipalities of Porto Grande and Ferreira Gomes, in the southern region of Amapá State. Nine monitoring campaigns were conducted, the first in April 2014 and the last in March 2016. Capture sites were selected so as to prioritize areas with possible occurrence of the species considered of greatest importance for public health and areas of contact between the wild environment and humans. The sampling effort aimed to identify the local vector fauna and to relate it to the transmission of diseases. Three collection phases were established, covering the periods of greatest hematophagous activity. Sampling was carried out using Shannon traps and CDC light traps and by means of specimen collection with the hold method. This procedure was carried out in the morning (between 08:00 and 11:00), in the afternoon-twilight (between 15:30 and 18:30), and at night (between 18:30 and 22:00); in the specific methodology of capture with the CDC equipment, the delimited period was from 18:00 until 06:00 the following day. Results: A total of 32 species of mosquitoes was identified. The 2,962 specimens collected belonged to three families (Culicidae, Psychodidae, and Simuliidae) and included the genera Psorophora, Sabethes, Simulium, Uranotaenia, and Wyeomyia, besides those represented by the family Psychodidae, which, owing to morphological complexities, allows safe identification (without the method of diaphanization and slide mounting for microscopy) only at the taxonomic level of the subfamily (Phlebotominae).
Conclusion: The nine monitoring campaigns provided the basis for outlining the possible epidemiological structure in the areas of influence of the Cachoeira Caldeirão HPP, in order to indicate which of the established sampling points, according to the groups of mosquitoes identified, would present the greatest potential for disease acquisition. What should mainly be considered, however, are the future events arising from reservoir filling. This argument is based on the fact that the reproductive success of the Culicidae is intrinsically related to the aquatic environment, in which their larvae develop until adulthood. From the moment the water mirror expands into new environments during the formation of the reservoir, the development and hatching of the eggs deposited in the substrate may change, causing a sudden explosion in the abundance of some genera, in particular Anopheles, which prefers denser forest environments close to bodies of water.

Keywords: Amazon, hydroelectric power plants

Procedia PDF Downloads 162
586 Patterns and Predictors of Intended Service Use among Frail Older Adults in Urban China

Authors: Yuanyuan Fu

Abstract:

Background and Purpose: Along with social and economic change, the traditional caregiving function of the family for older people has gradually weakened in contemporary China. Acknowledging this situation, and in order to better meet older people's needs for formal services and improve the quality of later life, this study seeks to identify patterns of intended service use among frail older people living in the community and to examine the determinants that explain heterogeneous variations in these patterns. Additionally, this study tests the relationship between cultural values and intended service use patterns and the mediating role of enabling factors between the two. Methods: Participants were recruited from Haidian District, Beijing, China in 2015. A multi-stage sampling method was adopted to select sub-districts, communities and people aged 70 years or older. After screening, 577 older people with limitations in daily life were successfully interviewed. After data cleaning, 550 cases were included in the analysis. This study establishes a conceptual framework based on the Andersen model (comprising predisposing, enabling and need factors), further developed by adding cultural value factors (attitudes towards filial piety and attitudes towards social face). Using latent class analysis (LCA), this study classifies overall patterns of older people's formal service utilization. Fourteen types of formal services were taken into account, including housework, voluntary support, transportation, home-delivered meals, home-delivery medical care, elderly canteens and day-care/respite centers, among others. Structural equation modeling (SEM) was used to examine the direct effect of cultural values on service use patterns and the mediating effect of the enabling factors.
Results: The LCA identified a hierarchical structure of service use patterns: multiple intended service use (N=69, 13%), selective intended service use (N=129, 23%), and light intended service use (N=352, 64%). Through SEM, after controlling for predisposing and need factors, the results showed a significant direct effect of cultural values on older people's intended service use patterns. Enabling factors had a partial mediation effect on the relationship between cultural values and the patterns. Conclusions and Implications: Differentiation of formal services may be important for meeting frail older people's service needs and for distributing program resources by identifying target populations for intervention, which may inform specific interventions to better support frail older people. Additionally, cultural values had a unique direct effect on the intended service use patterns of frail older people in China, enriching our theoretical understanding of the sources of cultural values and their impacts. The findings also highlighted the mediating effect of enabling factors on the relationship between cultural value factors and intended service use patterns. This study suggests that researchers and service providers should pay more attention to the important role of cultural value factors in shaping intended service use patterns, and be sensitive to the mediating effect of enabling factors when discussing the relationship between cultural values and these patterns.

Keywords: frail old people, intended service use pattern, culture value, enabling factors, contemporary China, latent class analysis

Procedia PDF Downloads 207
585 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. The study of cellular and molecular processes in cancer has likewise found valuable support in different simulation tools that, covering the spectrum mentioned above, have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication, and therefore in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells that surround a cancerous cell from being transformed.
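The Gillespie algorithm underlying the tool can be illustrated with a minimal sketch. This is a toy single-reaction system in plain Python, not the actual Cellulat implementation; the reaction and its rate constant are invented for illustration:

```python
import random

def gillespie(propensities, update, state, t_max):
    """Minimal Gillespie stochastic simulation.

    propensities: function mapping the state to a list of reaction rates
    update: list of state-change vectors, one per reaction
    """
    t, trajectory = 0.0, [(0.0, list(state))]
    while t < t_max:
        rates = propensities(state)
        total = sum(rates)
        if total == 0:                           # no reaction can fire
            break
        t += random.expovariate(total)           # time to next reaction
        r, acc = random.uniform(0, total), 0.0   # pick a reaction proportionally
        for i, a in enumerate(rates):
            acc += a
            if r <= acc:
                state = [s + d for s, d in zip(state, update[i])]
                break
        trajectory.append((t, list(state)))
    return trajectory

# Toy example: first-order degradation A -> 0 with rate 0.5 * [A]
random.seed(1)
traj = gillespie(lambda s: [0.5 * s[0]], [[-1]], [100], t_max=50.0)
```

Each step samples an exponentially distributed waiting time from the total propensity and then picks which reaction fires with probability proportional to its rate, which is what makes the method both exact and efficient for well-mixed systems.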

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 232
584 Safety Profile of Human Papillomavirus Vaccines: A Post-Licensure Analysis of the Vaccine Adverse Events Reporting System, 2007-2017

Authors: Giulia Bonaldo, Alberto Vaccheri, Ottavio D'Annibali, Domenico Motola

Abstract:

The Human Papilloma Virus (HPV) has been shown to cause different types of carcinoma, first of all cervical intraepithelial neoplasia. From the early 1980s to today, thanks first to preventive screening campaigns (Pap tests) and then to the introduction of HPV vaccines on the market, the number of new cases of cervical cancer has decreased significantly. Three HPV vaccines are currently approved: Cervarix® (HPV2 - virus types 16 and 18), Gardasil® (HPV4 - 6, 11, 16, 18) and Gardasil 9® (HPV9 - 6, 11, 16, 18, 31, 33, 45, 52, 58), all of which protect against the two high-risk types (16 and 18) that are mainly involved in cervical cancers. Although the remarkable effectiveness of these vaccines has been demonstrated, in recent years there have been many complaints about their risk-benefit profile due to Adverse Events Following Immunization (AEFI). The purpose of this study is to support the ongoing discussion on the safety profile of HPV vaccines with real-life data deriving from spontaneous reports of suspected AEFIs collected in the Vaccine Adverse Events Reporting System (VAERS). VAERS is a freely available national vaccine safety surveillance database of AEFI, co-administered by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). We collected all reports between January 2007 and December 2017 related to HPV vaccines, with a brand name (HPV2, HPV4, HPV9) or without (HPVX). A disproportionality analysis using the Reporting Odds Ratio (ROR) with 95% confidence interval and p ≤ 0.05 was performed. Over the 10-year period, 54,889 reports of AEFI related to HPV vaccines, corresponding to 224,863 vaccine-event pairs, were retrieved from VAERS. The highest number of reports was related to Gardasil (n = 42,244), followed by Gardasil 9 (7,212) and Cervarix (3,904). The brand name of the HPV vaccine was not reported in 1,529 cases.
The two events most frequently reported and statistically significant for each vaccine were: dizziness (n = 5,053) ROR = 1.28 (95% CI 1.24 – 1.31) and syncope (4,808) ROR = 1.21 (1.17 – 1.25) for Gardasil; injection site pain (305) ROR = 1.40 (1.25 – 1.57) and injection site erythema (297) ROR = 1.88 (1.67 – 2.10) for Gardasil 9; and headache (672) ROR = 1.14 (1.06 – 1.23) and loss of consciousness (528) ROR = 1.71 (1.57 – 1.87) for Cervarix. In total, we collected 406 reports of death and 2,461 cases of permanent disability over the ten-year period. Events consisting of incorrect vaccine storage or incorrect administration were not considered. The AEFI analysis showed that the most frequently reported events are non-serious and listed in the corresponding SmPCs. Beyond these, potential safety signals arose for less frequent and more severe AEFIs that would deserve further investigation, as already happened with the European Medicines Agency (EMA) referral concerning the adverse events POTS (Postural Orthostatic Tachycardia Syndrome) and CRPS (Complex Regional Pain Syndrome) associated with anti-papillomavirus vaccines.
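The ROR used here follows the standard 2x2 contingency-table formulation for spontaneous-report databases; a minimal sketch (the counts below are hypothetical placeholders, not the VAERS figures reported in this study):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with 95% CI from a 2x2 table of spontaneous reports.

    a: target event, target vaccine     b: other events, target vaccine
    c: target event, other products     d: other events, other products
    """
    ror = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of ln(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)  # lower 95% bound
    hi = math.exp(math.log(ror) + 1.96 * se)  # upper 95% bound
    return ror, lo, hi

# Illustrative (invented) counts: a signal is flagged when lo > 1
ror, lo, hi = reporting_odds_ratio(5053, 40000, 9000, 92000)
```

A disproportionality signal is conventionally considered present when the lower bound of the confidence interval exceeds 1.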

Keywords: adverse drug reactions, pharmacovigilance, safety, vaccines

Procedia PDF Downloads 139
583 Railway Composite Flooring Design: Numerical Simulation and Experimental Studies

Authors: O. Lopez, F. Pedro, A. Tadeu, J. Antonio, A. Coelho

Abstract:

The future of the railway industry lies in the innovation of lighter, more efficient and more sustainable trains. Weight optimization in railway vehicles reduces power consumption and CO₂ emissions and increases engine efficiency and the maximum speed reached. Additionally, it reduces wear of wheels and rails, increases the space available for passengers, etc. Among the various systems that make up railway interiors, the flooring system is one with the greatest impact both on passenger safety and comfort and on the weight of the interior systems. Due to their high weight-saving potential, relatively high mechanical resistance, good acoustic and thermal performance, ease of modular design, cost-effectiveness and long life, new sustainable composite materials and panels provide the latest innovations for competitive solutions in the development of flooring systems. However, one of the main drawbacks of flooring systems is their relatively poor resistance to point loads. Point loads in railway interiors can be caused by passengers or by components fixed to the flooring system, such as seats and restraint systems, handrails, etc. They can thus give rise to higher fatigue solicitations under service loads, or to zones with high stress concentrations under exceptional loads (higher longitudinal, transverse and vertical accelerations), reducing the system's useful life. Therefore, to verify all the mechanical and functional requirements of a flooring system, many physical prototypes would have to be created during the design phase, with all the high costs associated with them. Nowadays, virtual prototyping methods based on computer-aided design (CAD) and computer-aided engineering (CAE) software allow a product to be validated before committing to physical test prototypes.
The scope of this work was to use current computer tools and to integrate the processes of innovation, development, and manufacturing, so as to reduce the time from design to finished product and optimize the product for higher levels of performance and reliability. In this case, the mechanical response of several sandwich panels with different cores, polystyrene foams and composite corks, was assessed in order to optimize the weight and the mechanical performance of a flooring solution for railways. Sandwich panels with aluminum face sheets were tested to characterize their mechanical performance and determine the polystyrene foam and cork properties when used as inner cores. A railway flooring solution was then fully modelled (including the elastomer pads that provide the required vibration isolation from the car body) and structural simulations were performed using FEM analysis to comply with all the technical product specifications for the supply of a flooring system. Zones with high stress concentrations were studied and tested. The influence of vibration modes on the comfort level and stability is discussed. The information obtained with the computer tools was then complemented with several mechanical tests performed on some solutions and on specific components. The results of the numerical simulations and the experimental campaign are presented in this paper. This research work was performed as part of the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020.

Keywords: cork agglomerate core, mechanical performance, numerical simulation, railway flooring system

Procedia PDF Downloads 159
582 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose

Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini

Abstract:

Treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one such coupling process. Its principle is based on a reaction step (e.g., complexation) between metal ions and a polymer, followed by rejection of the formed species by a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated in depth herein for the removal of nickel ions by adding chitosan or carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions with or without NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions in a filtration cell equipped with a polyamide thin-film composite flat-sheet membrane (3.5 kDa). Without the polymer-addition step, nickel rejection was found to decrease from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: the increase in the electrolyte concentration screens the electrostatic interaction between the ions and the membrane fixed charge, which decreases their rejection.
It was shown that addition of a sufficient amount of polymer (greater than 10⁻² M of monomer units) can offset this decrease and allow good metal removal. However, the permeation flux was somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH value, the better the removal performance. The two polymers showed similar performance enhancement at natural pH. However, chitosan proved more efficient in slightly basic conditions (above its pKa), whereas CMC demonstrated very weak rejection performance when the pH is below its pKa. In terms of metal rejection, chitosan is thus probably the better option for basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan in natural conditions (5 < pH < 8), since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water showed that the increase in metal ion rejection induced by polymer addition is very low, due to competition between the various ions present in the complex mixture.

Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration

Procedia PDF Downloads 136
581 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

Default accounts hold that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need for relevance in discourse. However, Katsos' experimental results showed that, although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than with ad hoc scales. Neither default accounts nor Relevance Theory can fully explain this result, which raises two questions: (1) Is it possible that the strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be done under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The picture test material will be transformed into literal words in DMDX, and the target sentence will be shown word-by-word to participants in the soundproof room in our lab. Reading times of the target parts, i.e. the words containing scalar implicatures, will be recorded.
We presume that in the group with lexical scales, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without the support of a fixed mental context for the scale. In that case, whether the new input is informative or not does not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. The mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? Based on our experiments, we might be able to assume that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may occur in a dynamic interplay in the mind. For lexical scales, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also let the possible default or standardized paradigm override the role of context. However, the objects in ad hoc scales are not usually treated as scalar members in mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 294
580 Profiling of Bacterial Communities Present in Feces, Milk, and Blood of Lactating Cows Using 16S rRNA Metagenomic Sequencing

Authors: Khethiwe Mtshali, Zamantungwa T. H. Khumalo, Stanford Kwenda, Ismail Arshad, Oriel M. M. Thekisoe

Abstract:

Ecologically, the gut, mammary glands and bloodstream consist of distinct microbial communities of commensals, mutualists and pathogens, forming a complex ecosystem of niches. The by-products derived from these body sites i.e. faeces, milk and blood, respectively, have many uses in rural communities where they aid in the facilitation of day-to-day household activities and occasional rituals. Thus, although livestock rearing plays a vital role in the sustenance of the livelihoods of rural communities, it may serve as a potent reservoir of different pathogenic organisms that could have devastating health and economic implications. This study aimed to simultaneously explore the microbial profiles of corresponding faecal, milk and blood samples from lactating cows using 16S rRNA metagenomic sequencing. Bacterial communities were inferred through the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline coupled with SILVA database v138. All downstream analyses were performed in R v3.6.1. Alpha-diversity metrics showed significant differences between faeces and blood, faeces and milk, but did not vary significantly between blood and milk (Kruskal-Wallis, P < 0.05). Beta-diversity metrics on Principal Coordinate Analysis (PCoA) and Non-Metric Dimensional Scaling (NMDS) clustered samples by type, suggesting that microbial communities of the studied niches are significantly different (PERMANOVA, P < 0.05). A number of taxa were significantly differentially abundant (DA) between groups based on the Wald test implemented in the DESeq2 package (Padj < 0.01). The majority of the DA taxa were significantly enriched in faeces than in milk and blood, except for the genus Anaplasma, which was significantly enriched in blood and was, in turn, the most abundant taxon overall. A total of 30 phyla, 74 classes, 156 orders, 243 families and 408 genera were obtained from the overall analysis. 
The most abundant phyla across the three body sites were Firmicutes, Bacteroidota, and Proteobacteria. A total of 58 genus-level taxa were detected in all sample groups, while bacterial signatures of at least 8 of these occurred concurrently in corresponding faeces, milk and blood samples from the same pool of animals. The important taxa identified in this study could be categorized into four potentially pathogenic clusters: i) arthropod-borne; ii) food-borne and zoonotic; iii) mastitogenic; and iv) metritic and abortigenic. This study provides insight into the microbial composition of bovine faeces, milk, and blood and the extent of their overlap. It further highlights the potential risk of disease occurrence and transmission between the animals and the inhabitants of the sampled rural community, given the unsanitary practices associated with the use of cattle by-products.
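Alpha-diversity comparisons of the kind reported above typically rest on indices such as Shannon's H′ computed per sample; a minimal sketch with invented genus-level count vectors (not the study's data, which were analyzed in R via DADA2/SILVA):

```python
import math

def shannon_index(counts):
    """Shannon alpha-diversity H' = -sum(p_i * ln p_i) over taxon counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical count vectors: a relatively even community (faeces-like)
# versus one dominated by a single taxon (blood-like, e.g. Anaplasma)
faeces = [120, 80, 40, 30, 10, 5]
blood = [300, 10, 5]
h_faeces, h_blood = shannon_index(faeces), shannon_index(blood)
```

The more even community yields the higher H′; per-sample indices like these are what the Kruskal-Wallis test then compares across sample types.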

Keywords: microbial profiling, 16S rRNA, NGS, feces, milk, blood, lactating cows, small-scale farmers

Procedia PDF Downloads 86
579 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

Antenna design is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, many geometries remain to be tried that cannot be accommodated by predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. An evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but antenna parameters and geometries are too varied to fit into a single function. So, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtain the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc. are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to get the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS was chosen for simulations and results.
MATLAB is used to generate the computations and combinations and to log data. MATLAB is also used to apply machine learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size and dielectric properties; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach to automated antenna optimization for the Vivaldi antenna.
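The selection-recombination-mutation loop described above can be sketched as a minimal real-valued genetic algorithm. This is plain Python with an invented toy fitness function standing in for a simulated antenna figure of merit, not the authors' MATLAB/HFSS pipeline:

```python
import random

def genetic_search(fitness, bounds, pop=30, gens=60, mut=0.1):
    """Minimal real-valued genetic algorithm maximizing `fitness`.

    fitness: function mapping a parameter vector to a score
    bounds: list of (low, high) ranges, one per design parameter
    """
    rnd = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    population = [rnd() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]                 # selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(bounds)) if len(bounds) > 1 else 0
            child = a[:cut] + b[cut:]                   # one-point recombination
            for i, (lo, hi) in enumerate(bounds):       # per-gene mutation
                if random.random() < mut:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy quadratic fitness with a known optimum at (1.5, -0.5)
random.seed(0)
best = genetic_search(lambda p: -(p[0] - 1.5) ** 2 - (p[1] + 0.5) ** 2,
                      bounds=[(-5, 5), (-5, 5)])
```

In the pipeline described in the paper, the fitness call would instead dispatch an HFSS simulation and return the simulated figure of merit (gain, bandwidth, etc.), with evaluations parallelized across workers.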

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 88
578 Molecular Identification of Camel Tick and Investigation of Its Natural Infection by Rickettsia and Borrelia in Saudi Arabia

Authors: Reem Alajmi, Hind Al Harbi, Tahany Ayaad, Zainab Al Musawi

Abstract:

Hard ticks of the genus Hyalomma (family Ixodidae) are obligate ectoparasites in all their life stages on some domestic animals, mainly camels and cattle. Ticks may cause many economic and public health problems because of their blood-feeding behavior. They also act as vectors for many bacterial, viral and protozoan agents which may cause serious diseases, such as tick-borne encephalitis, Rocky Mountain spotted fever, Q fever and Lyme disease, affecting humans and/or animals. In the present study, molecular identification of ticks that attack camels in the Riyadh region, Saudi Arabia was performed based on the partial sequence of the mitochondrial 16S rRNA gene. The present study also aims to detect natural infection of the collected camel ticks with Rickettsia spp. and Borrelia spp. using PCR/hybridization of the citrate synthase encoding gene present in bacterial cells. Hard ticks infesting camels were collected from different camels on a farm in the Riyadh region, Saudi Arabia. Results of the present study showed that the collected specimens belong to two species: Hyalomma dromedarii, representing 99% of the identified specimens, and Hyalomma marginatum, accounting for 1% of the identified ticks. The molecular identification was made by blasting the sequences obtained in this study against sequences already present and identified in GenBank. All obtained sequences of H. dromedarii specimens showed 97-100% identity with the same gene sequence of the same species (Accession # L34306.1), which was used as a reference. Meanwhile, no intraspecific variation of H. marginatum could be measured because only one specimen was collected. Results also showed intraspecific variability between individuals of H. dromedarii in 92% of samples, ranging from 0.2-6.6%, while the remaining 7% of the H. dromedarii samples showed about 10.3% individual differences. The interspecific variability between H. dromedarii and H. marginatum was approximately 18.3%.
On the other hand, using the PCR/hybridization technique, we could detect natural infection of camel ticks with Rickettsia spp. and Borrelia spp. Results revealed the natural presence of both bacteria in the collected ticks: Rickettsia spp. infection was present in 29% of collected ticks, while 35% of collected specimens were infected with Borrelia spp. The results obtained in the present study are a new record for the molecular identification of camel ticks in Riyadh, Saudi Arabia and their natural infection with both Rickettsia spp. and Borrelia spp. These results may help scientists design a direct control strategy for ticks in order to protect camels, one of the most important economic animals. The results also spotlight the diseases that might be transmitted by ticks, informing a protective plan to prevent the spread of these dangerous agents. Further molecular studies using other mitochondrial and nuclear genes for tick identification are needed to confirm the results of the present study.

Keywords: camel ticks, Rickettsia spp., Borrelia spp., mitochondrial 16S rRNA gene

Procedia PDF Downloads 254
577 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences

Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur

Abstract:

In four studies, we examined whether sellers and buyers differ not only in subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy for objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should emerge only under certain boundary conditions. In Study 1, a published dataset was reanalyzed in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under time constraints. A published dataset was reanalyzed in which 84 participants were asked to provide selling and buying prices for monetary lotteries under three deliberation-time conditions (5, 10, 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time.
Post-hoc tests revealed main effects of perspective in the 5-second and 10-second deliberation-time conditions, but not in the 15-second condition. Thus, sellers’ EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times, to test whether the effect decays with repeated presentation. The difference between sellers' and buyers’ EV sensitivity was replicated across the repeated task presentations. Study 4 examined the loss attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was done by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. This pattern of results is consistent with an attentional, resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from the seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one’s perspective in a trading negotiation may improve price accuracy.
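The EV-sensitivity measure used across the four studies, the Spearman rank correlation between a participant's prices and the lotteries' expected values, can be sketched as follows. The lottery values and prices below are hypothetical illustrations, not data from the reanalyzed studies:

```python
def ranks(xs):
    """Rank values from 1..n, assigning average ranks to ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical lotteries and one participant's prices in each role.
evs = [2.0, 5.0, 8.0, 12.0]             # lotteries' expected values
seller_prices = [3.0, 6.0, 9.0, 13.0]   # ordering tracks EV perfectly
buyer_prices = [4.0, 2.0, 6.0, 5.0]     # ordering only partly tracks EV

print(spearman(seller_prices, evs))  # 1.0
print(spearman(buyer_prices, evs))   # 0.6
```

A higher correlation means the participant's price ordering tracks the lotteries' objective value ordering more closely, which is the sense in which sellers were "more accurate" in Studies 1-3.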

Keywords: decision making, endowment effect, pricing, loss aversion, loss attention

Procedia PDF Downloads 322
576 Antimicrobial and Anti-Biofilm Activity of Non-Thermal Plasma

Authors: Jan Masak, Eva Kvasnickova, Vladimir Scholtz, Olga Matatkova, Marketa Valkova, Alena Cejkova

Abstract:

Microbial colonization of medical instruments, catheters, implants, etc. is a serious problem in the spread of nosocomial infections. Biofilms exhibit enormous resistance to their environment: the resistance of biofilm populations to antibiotics or biocides is often two to three orders of magnitude higher than that of suspension populations. Of particular interest are substances or physical processes that primarily destroy the biofilm, so that the released cells can be killed by existing antibiotics. In addition, agents without a strong lethal effect exert less selection pressure toward further enhancement of resistance. Non-thermal plasma (NTP) is a neutral, ionized gas composed of particles (photons, electrons, positive and negative ions, free radicals, and excited or non-excited molecules) in permanent interaction. In this work, the effect of NTP generated by a cometary corona with a metallic grid on the formation and stability of biofilm and on the metabolic activity of cells in the biofilm was studied. NTP was applied to biofilm populations of Staphylococcus epidermidis DBM 3179, Pseudomonas aeruginosa DBM 3081, DBM 3777, ATCC 15442 and ATCC 10145, Escherichia coli DBM 3125, and Candida albicans DBM 2164 grown on solid media on Petri dishes and on the surface of the titanium alloy (Ti6Al4V) used for the production of joint replacements. Erythromycin (for S. epidermidis), polymyxin B (for E. coli and P. aeruginosa), amphotericin B (for C. albicans) and ceftazidime (for P. aeruginosa) were used to study the combined effect of NTP and antibiotics. Biofilms were quantified by the crystal violet assay. The metabolic activity of cells in the biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) colorimetric test, based on the reduction of MTT into formazan by the dehydrogenase system of living cells.
Fluorescence microscopy was applied to visualize the biofilm on the surface of the titanium alloy; SYTO 13 was used as a fluorescent probe to stain cells in the biofilm. It was shown that the biofilm populations of all studied microorganisms are very sensitive to the type of NTP used. The inhibition zone of the biofilm recorded after 60 minutes of exposure to NTP exceeded 20 cm², except for P. aeruginosa DBM 3777 and ATCC 10145, where it was about 9 cm². The metabolic activity of cells in the biofilm also differed among the individual microbial strains. High sensitivity to NTP was observed in S. epidermidis, whose biofilm metabolic activity decreased to 15% after 30 minutes of NTP exposure and to 1% after 60 minutes. By contrast, the metabolic activity of C. albicans cells decreased only to 53% after 30 minutes of NTP exposure; nevertheless, this result can be considered very good. Suitable combinations of NTP exposure time and antibiotic concentration achieved in most cases a remarkable synergistic effect on the reduction of the metabolic activity of the biofilm cells. For example, in the case of P. aeruginosa DBM 3777, a combination of 30 minutes of NTP with 1 mg/l of ceftazidime reduced the metabolic activity to below 4%.
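The metabolic-activity percentages reported above follow from normalizing the MTT-formazan absorbance of a treated biofilm to that of an untreated control. A minimal sketch of this normalization, with hypothetical absorbance readings and an assumed blank correction (the abstract does not give the raw measurement protocol):

```python
def metabolic_activity_pct(a_treated, a_control, a_blank=0.0):
    """Relative metabolic activity from MTT-formazan absorbance:
    blank-corrected treated-biofilm signal as a percentage of the
    blank-corrected untreated control."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Hypothetical absorbance readings for an NTP-treated biofilm,
# an untreated control, and a cell-free blank.
print(metabolic_activity_pct(0.12, 0.80, a_blank=0.04))  # ~10.5 (% of control)
```

A value of 15%, as for S. epidermidis after 30 minutes, thus means the treated biofilm retained 15% of the dehydrogenase activity of the untreated control.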

Keywords: anti-biofilm activity, antibiotic, non-thermal plasma, opportunistic pathogens

Procedia PDF Downloads 163