Search results for: optimized asset allocation
262 Harnessing the Power of Mixed Ligand Complexes: Enhancing Antimicrobial Activities with Thiosemicarbazones
Authors: Sakshi Gupta, Seema Joshi
Abstract:
Thiosemicarbazones (TSCs) have garnered significant attention in coordination chemistry due to their versatile coordination modes and pharmacological properties. Mixed ligand complexes of TSCs represent a promising area of research, offering enhanced antimicrobial activities compared to their parent compounds. This review provides an overview of the synthesis, characterization, and antimicrobial properties of mixed ligand complexes incorporating thiosemicarbazones. The synthesis of mixed ligand complexes typically involves the reaction of a metal salt with TSC ligands and additional ligands, such as nitrogen- or oxygen-based ligands. Various transition metals, including copper, nickel, and cobalt, have been employed to form mixed ligand complexes with TSCs. Characterization techniques such as spectroscopy, X-ray crystallography, and elemental analysis are commonly utilized to confirm the structures of these complexes. One of the key advantages of mixed ligand complexes is their enhanced antimicrobial activity compared to pure TSC compounds. The synergistic effect between the TSC ligands and additional ligands contributes to increased efficacy, possibly through improved metal-ligand interactions or enhanced membrane permeability. Furthermore, mixed ligand complexes offer the potential for selective targeting of microbial species while minimizing toxicity to mammalian cells. This selectivity arises from the specific interactions between the metal center, TSC ligands, and biological targets within microbial cells. Such targeted antimicrobial activity is crucial for developing effective treatments with minimal side effects. Moreover, the versatility of mixed ligand complexes allows for the design of tailored antimicrobial agents with optimized properties. By varying the metal ion, TSC ligands, and additional ligands, researchers can fine-tune the physicochemical properties and biological activities of these complexes. 
This tunability opens avenues for the development of novel antimicrobial agents with improved efficacy and reduced resistance. In conclusion, mixed ligand complexes of thiosemicarbazones represent a promising class of compounds with potent antimicrobial activities. Further research in this field holds great potential for the development of novel therapeutic agents to combat microbial infections effectively.
Keywords: metal complex, thiosemicarbazones, mixed ligand, selective targeting, antimicrobial activity
Procedia PDF Downloads 60
261 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors
Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang
Abstract:
Under the background of global climate change and the rapid development of modern urbanization, the frequency of climate disasters such as extreme precipitation in cities around the world is gradually increasing. In this paper, the Hi-PIMS model is used to simulate the "720" flood in Zhengzhou; the continuous stages of flood resilience are determined and the urban flood process is divided into stages. Flood resilience curves under the influence of multiple factors were determined, and urban flood resilience was evaluated by combining the results of these curves. The flood resilience of each urban unit grid was evaluated based on economy, population, road network, hospital distribution and land use type. Firstly, the rainfall data of meteorological stations near Zhengzhou and the remote sensing rainfall data from July 17 to 22, 2021 were collected. The Kriging interpolation method was used to extend the rainfall data across Zhengzhou. According to the rainfall data, the flood processes generated by four rainfall events in Zhengzhou were reproduced. Based on the inundation extent and depth in different areas, the flood process was divided into four stages: absorption, resistance, overload and recovery, using the once-in-50-years rainfall standard. At the same time, based on slope, GDP, population, hospital-affected area, land use type, road network density and other factors, the resilience curve was applied to evaluate the urban flood resilience of different regional units, and the differences among the flood processes produced by different precipitation events in the "720" rainstorm in Zhengzhou were analyzed. Faced with a rainstorm exceeding the once-in-1,000-years level, most areas quickly enter the overload stage. The influence of each factor differs across areas: some areas with ramps or higher terrain have better resilience and restore normal social order faster, that is, their recovery stage requires less time.
Some low-lying areas or special terrain, such as tunnels, enter the overload stage faster in the case of heavy rainfall. As a result, high levels of flood protection, water level warning systems and faster emergency response are needed in areas with low resilience and high risk. The building density of built-up areas, the population of densely populated areas and road network density all have a certain negative impact on urban flood resistance, while the positive impact of slope on flood resilience is also very obvious. While hospitals have positive effects on medical treatment, they also carry negative effects, such as high population density and asset density, when floods strike. A separate comparison of the unit grids containing hospitals shows that resilience within the hospital distribution range is low during floods. Therefore, in addition to improving the flood resistance capacity of cities, reasonable planning can also increase their flood response capacity. Changes in these influencing factors can further improve urban flood resilience, for example by raising design standards, providing temporary water storage areas when floods occur, training emergency personnel to respond faster, and adjusting emergency support equipment.
Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve
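The Kriging interpolation step used to extend the station rainfall data can be sketched as ordinary kriging with an assumed exponential variogram. The station layout, variogram parameters, and rainfall totals below are illustrative placeholders, not the study's data:

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=50.0):
    """Exponential semivariogram (assumed model; nugget omitted)."""
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy, z, x0, sill=1.0, rng=50.0):
    """Estimate the value at x0 from observations z at coordinates xy."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, sill, rng)
    A[n, n] = 0.0                       # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - x0, axis=-1), sill, rng)
    w = np.linalg.solve(A, b)           # kriging weights (sum to 1) + multiplier
    return float(w[:n] @ z)

# Illustrative rain-gauge layout (km) and event rainfall totals (mm)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([120.0, 180.0, 150.0, 210.0])
print(ordinary_kriging(stations, rain, np.array([5.0, 5.0])))
```

Because the nugget is zero, the estimator interpolates the gauge values exactly, and by symmetry the center of this square layout receives equal weights.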
Procedia PDF Downloads 40
260 Structural Development and Multiscale Design Optimization of Additively Manufactured Unmanned Aerial Vehicle with Blended Wing Body Configuration
Authors: Malcolm Dinovitzer, Calvin Miller, Adam Hacker, Gabriel Wong, Zach Annen, Padmassun Rajakareyar, Jordan Mulvihill, Mostafa S.A. ElSayed
Abstract:
The research work presented in this paper was developed by the Blended Wing Body (BWB) Unmanned Aerial Vehicle (UAV) team, a fourth-year capstone project at the Carleton University Department of Mechanical and Aerospace Engineering. Here, a clean-sheet UAV with BWB configuration is designed and optimized using a Multiscale Design Optimization (MSDO) approach employing lattice materials, taking into consideration design-for-additive-manufacturing constraints. The BWB-UAV is being developed with a mission profile designed for surveillance purposes with a minimum payload of 1000 grams. To demonstrate the design methodology, a single design loop of a sample rib from the airframe is shown in detail. This includes presentation of the conceptual design, materials selection, experimental characterization and residual thermal stress distribution analysis of additively manufactured materials, manufacturing constraint identification, critical load computations, stress analysis and design optimization. A dynamic turbulent critical load case was identified, composed of a 1-g static maneuver with an incremental Power Spectral Density (PSD) gust, which was used as a deterministic design load case for the design optimization. A 2D flat-plate Doublet Lattice Method (DLM) was used to simulate the aerodynamics in the aeroelastic analysis. The aerodynamic results were verified against a 3D CFD analysis applying Spalart-Allmaras and SST k-omega turbulence models to the rigid UAV, and against the vortex lattice method applied in the OpenVSP environment. Design optimization of a single rib was conducted using topology optimization as well as MSDO. Compared to a solid rib, weight savings of 36.44% and 59.65% were obtained for the topology optimization and the MSDO, respectively.
These results suggest that MSDO is an acceptable alternative to topology optimization in weight-critical applications while preserving the functional requirements.
Keywords: blended wing body, multiscale design optimization, additive manufacturing, unmanned aerial vehicle
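The weight-savings percentages quoted above follow the usual relative-mass formula. A minimal check, with hypothetical rib masses chosen only to reproduce the reported figures (the actual masses are not given in the abstract):

```python
def weight_savings(solid_mass_g: float, optimized_mass_g: float) -> float:
    """Percent mass removed relative to the solid baseline rib."""
    return (solid_mass_g - optimized_mass_g) / solid_mass_g * 100.0

# Hypothetical masses: a 100 g solid rib reduced by each method
print(round(weight_savings(100.0, 63.56), 2))  # topology-optimized rib -> 36.44
print(round(weight_savings(100.0, 40.35), 2))  # MSDO rib -> 59.65
```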
Procedia PDF Downloads 378
259 Novel p22-Monoclonal Antibody Based Blocking ELISA for the Detection of African Swine Fever Virus Antibodies in Serum
Authors: Ghebremedhin Tsegay, Weldu Tesfagaber, Yuanmao Zhu, Xijun He, Wan Wang, Zhenjiang Zhang, Encheng Sun, Jinya Zhang, Yuntao Guan, Fang Li, Renqiang Liu, Zhigao Bu, Dongming Zhao*
Abstract:
African swine fever (ASF) is a highly infectious viral disease of pigs, resulting in significant economic loss worldwide. As there are no approved vaccines or treatments, the control of ASF depends entirely on early diagnosis and culling of infected pigs. Thus, highly specific and sensitive diagnostic assays are required for accurate and early diagnosis of ASF virus (ASFV). Currently, only a few recombinant proteins have been tested and validated for use as reagents in ASF diagnostic assays. The most promising ones for ASFV antibody detection are p72, p30, p54, and pp62. So far, three ELISA kits based on these recombinant proteins have been commercialized. Due to the complex nature of the virus and the various forms of the disease, robust serodiagnostic assays are still required. The ASFV p22 protein, encoded by the KP177R gene, is located in the inner membrane of the viral particle and appears transiently in the plasma membrane early after virus infection. The p22 protein interacts with numerous cellular proteins involved in the processes of phagocytosis and endocytosis through different cellular pathways. However, p22 does not seem to be involved in virus replication or swine pathogenicity. In this study, E. coli-expressed recombinant p22 protein was used to generate a monoclonal antibody (mAb), and its potential use for the development of a blocking ELISA (bELISA) was evaluated. A total of 806 pig serum samples were tested to evaluate the bELISA. According to the ROC (receiver operating characteristic) analysis, a sensitivity of 100% and a specificity of 98.10% were recorded when the PI cut-off value was set at 47%. The novel assay was able to detect the antibodies as early as 9 days post infection. Finally, a highly sensitive, specific and rapid novel p22-mAb based bELISA assay was developed and optimized for detection of antibodies against genotype I and II ASFVs.
It is a promising candidate for early and accurate detection of the antibodies and is highly expected to play a valuable role in the containment and prevention of ASF.
Keywords: ASFV, blocking ELISA, diagnosis, monoclonal antibodies, sensitivity, specificity
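The sensitivity and specificity reported above follow directly from applying the ROC-derived percentage-inhibition (PI) cut-off to the test panel. A sketch of that scoring step; the PI readings below are synthetic, not the study's 806 sera:

```python
def sensitivity_specificity(pi_values, truth, cutoff=47.0):
    """Classify a serum positive when PI >= cutoff, then score against truth.

    pi_values: percentage-inhibition readings from the bELISA
    truth: True for sera known ASFV-antibody positive, False for negatives
    """
    tp = sum(1 for pi, t in zip(pi_values, truth) if pi >= cutoff and t)
    fn = sum(1 for pi, t in zip(pi_values, truth) if pi < cutoff and t)
    tn = sum(1 for pi, t in zip(pi_values, truth) if pi < cutoff and not t)
    fp = sum(1 for pi, t in zip(pi_values, truth) if pi >= cutoff and not t)
    return tp / (tp + fn) * 100.0, tn / (tn + fp) * 100.0

# Synthetic example: four known-positive and four known-negative sera
pi = [88.0, 73.5, 51.2, 47.0, 46.9, 30.1, 12.4, 5.0]
positive = [True, True, True, True, False, False, False, False]
sens, spec = sensitivity_specificity(pi, positive)
print(sens, spec)  # 100.0 100.0 on this toy panel
```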
Procedia PDF Downloads 77
258 Hear Me: The Learning Experience on “Zoom” of Students With Deafness or Hard of Hearing Impairments
Authors: H. Weigelt-Marom
Abstract:
Over the years and until the outbreak of the COVID-19 pandemic, deaf or hard of hearing students studying in higher education institutions attended lectures on campus using hearing aids and strategies adapted for frontal learning in a classroom. Usually, these aids were well known to them from their earlier study experience in school. However, the transition to online lessons due to the latest pandemic led deaf or hard of hearing students to study outside of their physical, well-known learning environment. The change of learning environment and structure raised new challenges for these students. The present study examined the learning experience, limitations, challenges and benefits of learning online with lecturers and classmates via the “Zoom” video conference program among deaf or hard of hearing students in an academic setting. In addition, emotional and social aspects related to learning in general versus on “Zoom” were examined. The study included 18 students diagnosed as deaf or hard of hearing, studying in various higher education institutions in Israel. All students had experienced lessons on “Zoom”. Following recruitment of the study group through the deaf and hard of hearing non-profit organization “Ma’agalei Shema” and receipt of the participants’ informed consent, students were requested to answer a Google Forms questionnaire and participate in an interview. The questionnaire covered background information (e.g., age, year of studying, faculty, etc.), level of computer literacy, and level of hearing and forms of communication (e.g., lip reading, sign language, etc.). Each interview was a one-on-one, semi-structured, in-depth interview conducted by the main researcher of the study (interview duration: up to 60 minutes). The interviews were held on “Zoom” using specific adaptations for each interviewee: a clear on-screen view of the interviewer’s face for lip and face reading, and/or professional sign language interpretation or a live text transcript of the conversation.
Additionally, interviewees used their audio devices if needed. Questions addressed: the learning experience, difficulties and advantages of studying via “Zoom”, learning in a classroom versus on “Zoom”, and emotional and social aspects related to learning. Thematic analysis of the interviews revealed severe difficulties in the ability of deaf or hard of hearing students to comprehend “Zoom” lessons without adaptive aids. For example, interviewees indicated difficulties understanding “Zoom” lessons due to their inability to use the hearing devices commonly used by them in the classroom (e.g., FM systems). 80% indicated that they could not comprehend “Zoom” lessons since they could not see the lecturer’s face, either because lecturers did not agree to turn on their cameras or because they did not keep a straightforward, clear view of their face while teaching. However, not all descriptions of learning via “Zoom” were negative. For example, 20% reported the recording of “Zoom” lessons as a main advantage, enabling them to watch the lessons repeatedly at their own pace, mostly assisted by friends and family who translated the audio output into an accessible input. These findings and others regarding the learning experience of the study group on “Zoom”, as well as the participants’ recommendations for enabling deaf or hard of hearing students to study online inclusively, will be presented at the conference.
Keywords: deaf or hard of hearing, learning experience, Zoom, qualitative research
Procedia PDF Downloads 116
257 A Model for Teaching Arabic Grammar in Light of the Common European Framework of Reference for Languages
Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla
Abstract:
The complexity of Arabic grammar poses challenges for learners, particularly in relation to its arrangement, classification, abundance, and bifurcation. The challenge at hand is a result of the contextual factors that gave rise to the grammatical rules in question, as well as the pedagogical approach employed at the time, which was tailored to the needs of learners during that particular historical period. Consequently, modern-day students encounter this same obstacle. This requires a thorough examination of the arrangement and categorization of Arabic grammatical rules based on particular criteria, as well as an assessment of their objectives. Additionally, it is necessary to identify the prevalent and renowned grammatical rules, as well as those that are infrequently encountered, obscure and disregarded. This paper presents a compilation of grammatical rules that require arrangement and categorization in accordance with the standards outlined in the Common European Framework of Reference for Languages (CEFR). In addition to facilitating comprehension of the curriculum, accommodating learners' requirements, and establishing the fundamental competencies for achieving proficiency in Arabic, it is imperative to ascertain the conventions that language learners necessitate in alignment with explicitly delineated benchmarks such as the CEFR criteria. The aim of this study is to reduce the quantity of grammatical rules that are typically presented to non-native Arabic speakers in Arabic textbooks. This reduction is expected to enhance the motivation of learners to continue their Arabic language acquisition and to approach the level of proficiency of native speakers. The primary obstacle faced by learners is the intricate nature of Arabic grammar, which poses a significant challenge in the realm of study. The proliferation and complexity of regulations evident in Arabic language textbooks designed for individuals who are not native speakers is noteworthy. 
The inadequate organisation and delivery of the material create the impression that the grammar is being imparted to a student with the intention of memorising "Alfiyyat-Ibn-Malik." Consequently, the sequence of grammatical rules instruction was altered, with rules originally intended for later instruction being presented first and those intended for earlier instruction being presented subsequently. Students often focus on learning grammatical rules that are not necessarily required while neglecting the rules that are commonly used in everyday speech and writing. Non-Arab students are taught Arabic grammar chapters that are infrequently utilised in Arabic literature and may be a topic of debate among grammarians. The aforementioned findings are derived from the statistical analysis and investigations conducted by the researcher, which will be disclosed in due course of the research. To instruct non-Arabic speakers on grammatical rules, it is imperative to discern the most prevalent grammatical frameworks in grammar manuals and linguistic literature (study sample). The present proposal suggests the allocation of grammatical structures across linguistic levels, taking into account the guidelines of the CEFR, as well as the grammatical structures that are necessary for non-Arabic-speaking learners to generate a modern, cohesive, and comprehensible language.
Keywords: grammar, Arabic, functional, framework, problems, standards, statistical, popularity, analysis
Procedia PDF Downloads 94
256 Performance of High Efficiency Video Codec over Wireless Channels
Authors: Mohd Ayyub Khan, Nadeem Akhtar
Abstract:
Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before its transmission over wireless channels. The High Efficiency Video Codec (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed jointly by ITU-T and ISO/IEC teams. HEVC targets high resolution videos, such as 4K or 8K, that can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC for the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using the HEVC decoder.
It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
Keywords: AWGN, forward error correction, HEVC, video coding, QAM
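The modulate-transmit-demodulate chain described above can be sketched for the simplest QAM constellation, Gray-mapped 4-QAM (QPSK), over an AWGN channel. This is a bit-error-rate illustration only: it carries random bits rather than an actual HEVC bitstream and omits the FEC stage:

```python
import numpy as np

def qpsk_awgn_ber(num_bits=200_000, ebn0_db=8.0, seed=0):
    """Estimate the bit error rate of Gray-mapped QPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, num_bits)
    # Map bit pairs to QPSK symbols (Gray mapping: one bit per I/Q axis)
    i = 1 - 2 * bits[0::2]
    q = 1 - 2 * bits[1::2]
    symbols = (i + 1j * q) / np.sqrt(2)           # unit-energy symbols
    # AWGN: Es/N0 = 2 * Eb/N0, since QPSK carries 2 bits per symbol
    esn0 = 2 * 10 ** (ebn0_db / 10)
    noise_std = np.sqrt(1 / (2 * esn0))           # per-dimension noise std
    rx = symbols + noise_std * (rng.standard_normal(symbols.size)
                                + 1j * rng.standard_normal(symbols.size))
    # Hard-decision demodulation back to bits
    out = np.empty(num_bits, dtype=int)
    out[0::2] = (rx.real < 0).astype(int)
    out[1::2] = (rx.imag < 0).astype(int)
    return float(np.mean(out != bits))

print(qpsk_awgn_ber(ebn0_db=8.0))   # low BER at high Eb/N0
print(qpsk_awgn_ber(ebn0_db=0.0))   # noticeably higher BER as SNR drops
```

Lowering Eb/N0 raises the BER sharply, mirroring the drastic quality loss the abstract observes; in the full chain, the FEC stage would correct most residual errors at moderate SNR.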
Procedia PDF Downloads 149
255 Influence of Genotype, Explant, and Hormone Treatment on Agrobacterium-Transformation Success in Salix Callus Culture
Authors: Lukas J. Evans, Danilo D. Fernando
Abstract:
Shrub willows (Salix spp.) have many characteristics that make them suitable for a variety of applications such as riparian zone buffers, environmental contaminant sequestration, living snow fences, and biofuel production. In some cases, these functions are limited due to physical or financial obstacles associated with the number of individuals needed to reasonably satisfy that purpose. One way to increase the efficiency of willows is to bioengineer them with the genetic improvements suitable for the desired use. To accomplish this goal, an optimized in vitro transformation protocol via Agrobacterium tumefaciens is necessary to reliably express genes of interest. Therefore, the aim of this study is to observe the influence of tissue culture with different willow cultivars, hormones, and explants on the percentage of calli expressing the green fluorescent protein (GFP) reporter gene, to find ideal transformation conditions. Each callus was produced from one-month-old open-pollinated seedlings of three Salix miyabeana cultivars (‘SX61’, ‘WT1’, and ‘WT2’) from three different explants (lamina, petiole, and internodes). Explants were cultured for one month on MS media with different concentrations of 6-benzylaminopurine (BAP) and 1-naphthaleneacetic acid (NAA) (no hormones; 1 mg L⁻¹ BAP only; 3 mg L⁻¹ NAA only; 1 mg L⁻¹ BAP and 3 mg L⁻¹ NAA; and 3 mg L⁻¹ BAP and 1 mg L⁻¹ NAA) to produce a callus. Samples were then treated with Agrobacterium tumefaciens at an OD600 of 0.6-0.8 for 30 minutes to insert the GFP transgene, co-cultivated for 72 hours, and selected on the same media type they were cultured on with 7.5 mg L⁻¹ hygromycin added for 1 week before GFP visualization under a UV dissecting scope. The percentage of GFP-expressing calli as well as the average number of fluorescing GFP units per callus were recorded, and results were evaluated through an ANOVA test (α = 0.05).
The WT1 internode-derived calli on media with 3 mg L⁻¹ NAA + 1 mg L⁻¹ BAP and with 1 mg L⁻¹ BAP alone produced a significantly higher percentage of GFP-expressing calli than every other group (19.1% and 19.4%, respectively). Additionally, the WT1 internode group cultured with 3 mg L⁻¹ NAA + 1 mg L⁻¹ BAP produced an average of 2.89 GFP units per callus, while the group cultivated with 1 mg L⁻¹ BAP produced an average of 0.84 GFP units per callus. In conclusion, genotype, explant choice, and hormones all play a significant role in increasing successful transformation in willows. Future studies to produce whole-callus GFP expression and subsequent plantlet regeneration are necessary for a complete willow transformation protocol.
Keywords: agrobacterium, callus, Salix, tissue culture
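The significance test named above (one-way ANOVA at α = 0.05) compares between-group to within-group variance. A minimal sketch of the F statistic; the GFP-unit counts per callus below are invented for illustration and are not the study's data:

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for one-way ANOVA: between-group vs within-group variance."""
    all_data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = all_data.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_data) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical GFP units per callus for three treatments (not the study's data)
naa_bap = [3.1, 2.7, 2.9, 3.0, 2.8]    # 3 mg/L NAA + 1 mg/L BAP
bap_only = [0.9, 0.7, 1.0, 0.8, 0.8]   # 1 mg/L BAP alone
no_hormone = [0.1, 0.0, 0.2, 0.1, 0.0]

f_stat = one_way_anova_f(naa_bap, bap_only, no_hormone)
# Critical F(2, 12) at alpha = 0.05 is about 3.89 (standard table value)
print(f"F = {f_stat:.1f}; significant: {f_stat > 3.89}")
```

With three groups of five, the degrees of freedom are (2, 12); an F statistic above the 3.89 critical value rejects the hypothesis of equal treatment means.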
Procedia PDF Downloads 124
254 Assessing the Risk of Socio-economic Drought: A Case Study of Chuxiong Yi Autonomous Prefecture, China
Authors: Mengdan Guo, Zongmin Wang, Haibo Yang
Abstract:
Drought is one of the most complex and destructive natural disasters, with a huge impact on both nature and society. In recent years, adverse climate conditions and uncontrolled human activities have exacerbated the occurrence of global droughts, among which socio-economic droughts are closely related to human survival. The study of socio-economic drought risk assessment is crucial for sustainable social development. Therefore, this study comprehensively considered the risk of disaster-causing factors, the exposure level of the disaster-prone environment, and the vulnerability of the disaster-bearing body to construct a socio-economic drought risk assessment model for Chuxiong Prefecture in Yunnan Province. Firstly, a three-dimensional intensity-area-duration frequency analysis of drought was conducted, followed by a statistical analysis of the drought risk of the socio-economic system. Secondly, a grid analysis model was constructed to assess the exposure levels of different agents and study the effects of drought on regional crop growth, industrial economic growth, and human consumption thresholds. Thirdly, an agricultural vulnerability model for different irrigation levels was established using the DSSAT crop model. Industrial economic vulnerability and domestic water vulnerability under the impact of drought were investigated by constructing a standardized socio-economic drought index and coupling water loss. Finally, the socio-economic drought risk was assessed by combining hazard, exposure, and vulnerability. The results show that the frequency of drought occurrence in Chuxiong Prefecture, Yunnan Province is relatively high, with high population and economic exposure concentrated in the urban areas of the various counties and districts, and high agricultural exposure concentrated in mountainous and rural areas.
Irrigation can effectively reduce agricultural vulnerability in Chuxiong: the yield loss rate under the 20 mm winter irrigation scenario decreased by 10.7% compared to the rain-fed scenario. From the perspective of comprehensive risk, the distribution of long-term socio-economic drought risk in Chuxiong Prefecture is relatively consistent, with the more severe areas mainly concentrated in Chuxiong City and Lufeng County, followed by counties such as Yao'an, Mouding and Yuanmou. Shuangbai County has the lowest socio-economic drought risk, which is basically consistent with the economic distribution trend of Chuxiong Prefecture. In June, July, and August, the drought risk in Chuxiong Prefecture is generally high. These results can provide constructive suggestions for the allocation of water resources and the construction of water conservancy facilities in Chuxiong Prefecture, and provide a scientific basis for more effective drought prevention and control. Future research should address data quality and availability, climate change impacts, human activity impacts, and countermeasures, for a more comprehensive understanding of and more effective response to drought risk in Chuxiong Prefecture.
Keywords: DSSAT model, risk assessment, socio-economic drought, standardized socio-economic drought index
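The final combination step above (risk from hazard, exposure, and vulnerability) is commonly expressed as a per-cell product of normalized layers. A minimal grid sketch with invented values, not the Chuxiong data or the study's exact weighting scheme:

```python
import numpy as np

def drought_risk(hazard, exposure, vulnerability):
    """Combine normalized [0, 1] layers into a per-cell risk index."""
    for layer in (hazard, exposure, vulnerability):
        assert np.all((layer >= 0) & (layer <= 1)), "layers must be normalized"
    return hazard * exposure * vulnerability

# Illustrative 2x3 grid of cells (e.g. urban vs mountainous/rural)
hazard = np.array([[0.8, 0.6, 0.4], [0.7, 0.5, 0.3]])
exposure = np.array([[0.9, 0.7, 0.2], [0.8, 0.4, 0.1]])
vulnerability = np.array([[0.5, 0.6, 0.7], [0.4, 0.5, 0.8]])

risk = drought_risk(hazard, exposure, vulnerability)
print(risk.round(3))
print("highest-risk cell:", np.unravel_index(risk.argmax(), risk.shape))
```

The highest index lands where high hazard, high exposure, and moderate vulnerability coincide, matching the abstract's observation that severe risk concentrates in the economically dense urban cells.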
Procedia PDF Downloads 55
253 Synthesis, Characterization and Photocatalytic Applications of Ag-Doped-SnO₂ Nanoparticles by Sol-Gel Method
Authors: M. S. Abd El-Sadek, M. A. Omar, Gharib M. Taha
Abstract:
In recent years, photocatalytic degradation of various kinds of organic and inorganic pollutants using semiconductor powders as photocatalysts has been extensively studied. Owing to their relatively high photocatalytic activity, biological and chemical stability, low cost, non-toxicity and long stable life, tin oxide materials have been widely used as catalysts in chemical reactions, including the synthesis of vinyl ketone, oxidation of methanol, and so on. Tin oxide (SnO₂), with a rutile-type crystalline structure, is an n-type wide band gap (3.6 eV) semiconductor that presents a proper combination of chemical, electronic and optical properties that make it advantageous in several applications. In the present work, SnO₂ nanoparticles were synthesized at room temperature by the sol-gel process and thermohydrolysis of SnCl₂ in isopropanol, controlling the crystallite size through calcination. The synthesized nanoparticles were characterized using XRD analysis, TEM, FT-IR, and UV-Visible spectroscopic techniques. The crystalline structure and grain size of the synthesized samples were analyzed by X-ray diffraction (XRD), and the XRD patterns confirmed the presence of the tetragonal SnO₂ phase. In this study, methylene blue degradation was tested using SnO₂ nanoparticles (at different calcination temperatures) as a photocatalyst under sunlight as the source of irradiation. The results showed that the highest percentage of degradation of methylene blue dye was obtained using the SnO₂ photocatalyst calcined at 800 °C. The operational parameters were optimized to find the best conditions for complete removal of organic pollutants from aqueous solution. It was found that the degradation of dyes depends on several parameters such as irradiation time, initial dye concentration, the dose of the catalyst, and the presence of metals such as silver as a dopant and its concentration.
Percent degradation increased with irradiation time. The degradation efficiency decreased as the initial concentration of the dye increased. The degradation efficiency increased with catalyst dose up to a certain level; beyond that, further increasing the SnO₂ photocatalyst dose decreased the degradation efficiency. The degradation efficiency obtained from pure SnO₂ was compared with that of SnO₂ doped with different percentages of Ag.
Keywords: SnO₂ nanoparticles, sol-gel method, photocatalytic applications, methylene blue, degradation efficiency
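The percent degradation tracked above is the standard (C₀ − Cₜ)/C₀ ratio. A minimal sketch; the concentration time series below is illustrative, not measured data from this study:

```python
def degradation_efficiency(c0: float, ct: float) -> float:
    """Percent dye degraded: (C0 - Ct) / C0 * 100, from the initial and
    current concentrations (or, via Beer-Lambert, from absorbance readings)."""
    return (c0 - ct) / c0 * 100.0

# Illustrative methylene blue run: 10 mg/L initial, sampled during irradiation
samples = [(0, 10.0), (30, 6.2), (60, 3.1), (120, 0.8)]  # (minutes, mg/L)
for minutes, ct in samples:
    print(f"{minutes:4d} min: {degradation_efficiency(10.0, ct):5.1f} % degraded")
```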
Procedia PDF Downloads 153
252 Statistical Pattern Recognition for Biotechnological Process Characterization Based on High Resolution Mass Spectrometry
Authors: S. Fröhlich, M. Herold, M. Allmer
Abstract:
Early-stage quantitative analysis of host cell protein (HCP) variations is challenging yet necessary for comprehensive bioprocess development. High resolution mass spectrometry (HRMS) provides a high-end technology for accurate identification alongside quantitative information. Here we describe a flexible HRMS assay platform to quantify HCPs relevant in microbial expression systems such as E. coli, in both upstream and downstream development, by means of MVDA tools. Cell pellets were lysed and proteins extracted; purified samples were not further treated before applying the SMART tryptic digest kit. Peptide separation was optimized using an RP-UHPLC separation platform. HRMS-MS/MS analysis was conducted on an Orbitrap Velos Elite applying CID. Quantification was performed label-free, taking into account ionization properties and physicochemical peptide similarities. Results were analyzed using SIEVE 2.0 (Thermo Fisher Scientific) and SIMCA (Umetrics AG). The developed HRMS platform was applied to an E. coli expression set with varying productivity and the corresponding downstream process. Selected HCPs were successfully quantified within the fmol range. Analysing HCP networks based on pattern analysis facilitated low-level quantification and enhanced validity. This approach is of high relevance for high-throughput screening experiments during upstream development, e.g. for titer determination, dynamic HCP network analysis or product characterization. Considering the downstream purification process, physicochemical clustering of identified HCPs is of relevance for adjusting buffer conditions accordingly. The technology thus provides an innovative approach for label-free MS-based quantification relying on statistical pattern analysis and comparison.
Absolute quantification based on physicochemical properties and peptide similarity scores provides a technological approach without the need for sophisticated sample preparation strategies and has therefore proven to be straightforward, sensitive and highly reproducible in terms of product characterization.
Keywords: process analytical technology, mass spectrometry, process characterization, MVDA, pattern recognition
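The physicochemical clustering mentioned above can be sketched with a plain principal component analysis, the workhorse behind most MVDA toolchains (the abstract names SIMCA, whose internals are not shown here). The HCP feature matrix below is synthetic, with made-up pI, hydropathy, and mass values:

```python
import numpy as np

def pca(X, n_components=2):
    """Principal component analysis via SVD of the mean-centered matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # sample coordinates
    explained = s**2 / np.sum(s**2)            # variance ratio per component
    return scores, explained[:n_components]

# Synthetic HCP feature matrix: rows = proteins, cols = (pI, GRAVY, mass/kDa)
rng = np.random.default_rng(1)
acidic = rng.normal([5.0, -0.4, 30.0], 0.3, size=(10, 3))
basic = rng.normal([9.0, 0.2, 60.0], 0.3, size=(10, 3))

scores, explained = pca(np.vstack([acidic, basic]))
print(explained)  # first component dominates when two clusters are present
```

Projecting the proteins onto the first components separates the two physicochemical groups, which is the kind of clustering used to choose buffer conditions for the downstream steps.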
Procedia PDF Downloads 252
251 Mandate of Heaven and Serving the People in Chinese Political Rhetoric: An Evolving Discourse System across Three Thousand Years
Authors: Weixiao Wei, Chris Shei
Abstract:
This paper describes the Mandate of Heaven as a source of justification for the ruling regime in ancient China approximately three thousand years ago. Initially, the kings of the Shang dynasty simply nominated themselves as the sons of Heaven sent to Earth to rule the common people. As the last generation of Shang kings became corrupted and ruled with brutal force and cruelty, which directly caused their destruction, the succeeding kings of the Zhou dynasty realised the importance of virtue and the provision of goods to the people. Legitimacy of the ruling regimes came to rest not entirely on random allocation of the throne by an unknown supernatural force but on a foundation comprising morality and the ability to provide goods. The latter element was picked up by the current ruling regime, the Chinese Communist Party, and became the cornerstone of its political legitimacy, also known as ‘performance legitimacy’, where economic development accounts for the satisfaction of the people in place of elections and other democratic means of providing legal-rational legitimacy. Under these circumstances, it becomes important for the ruling party to use political rhetoric to convince people of the good performance of the government in the economy, morality, and foreign policy. Thus, we see a lot of propaganda material in both government policy statements and international press conference announcements. The former consists mainly of important speeches made by prominent figures at Party conferences, which are not only made publicly available on government websites but also become obligatory reading materials for university entrance examinations. The latter consists of announcements about foreign policies and the strategies and actions taken by the government regarding foreign affairs, made at international conferences and offered in Chinese-English bilingual versions on official websites. 
This documentation strategy creates an impressive image of the Chinese Communist Party as domestically competent and internationally strong, taking care of the people it governs in terms of economic needs and defending the country against foreign interference and global adversities. This political discourse system, comprising reading materials fully extractable from government websites, also becomes an excellent repertoire for teaching and research in contemporary Chinese language, discourse and rhetoric, Chinese culture and tradition, Chinese political ideology, and Chinese-English translation. This paper aims to provide a detailed and comprehensive description of the current Chinese political discourse system, arguing for its lineage from the rhetorical convention of the Mandate of Heaven in ancient China and its current concentration on serving the people in place of elections, human rights, and freedom of speech. The paper will also provide guidelines as to how this discourse system and the official documents created under it can become excellent research and teaching materials in applied linguistics. Keywords: mandate of heaven, Chinese communist party, performance legitimacy, serving the people, political discourse
Procedia PDF Downloads 110
250 Selenuranes as Cysteine Protease Inhibitors: Theorical Investigation on Model Systems
Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto
Abstract:
In the last four decades the biological activities of selenium compounds have received great attention, particularly for hypervalent derivatives of selenium (IV) used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer and parasitic infections. These enzymes are therefore a valuable target for designing new small molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is a given that inhibition occurs through the reaction between a thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work we performed a theoretical investigation on model systems to study the possible routes of the substitution reactions. Several nucleophiles may be present in biological systems; our interest is centered on the thiol groups of the cysteine proteases and the hydroxyls from the aqueous environment. We therefore expect this study to clarify the possibility of a two-stage reaction route, the first stage consisting of the substitution of the chlorine atoms by hydroxyl groups, followed by the replacement of these hydroxyl groups by thiol groups in the selenuranes. The structures of selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and a 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that hydroxyl groups from water react preferentially with selenuranes and are subsequently replaced by thiol groups. 
The computed energies are -106.0730423 kcal/mol for double substitution by hydroxyl groups and 96.63078511 kcal/mol for thiol groups. Solvation and pH reduction promote this route, raising the energy for the reaction with hydroxyl groups to -50.75637672 kcal/mol and lowering the energy for thiols to 7.917767189 kcal/mol. Alternative pathways were analyzed for monosubstitution (considering the competition between Cl, OH and SH groups) and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied. Keywords: chalcogens, computational study, cysteine proteases, enzyme inhibitors
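Schematically, the two-stage route suggested above can be written as successive ligand-exchange steps (the L₂SeCl₂-type stoichiometry is an assumed simplification of the model selenuranes, with the reported energies rounded):

```latex
\begin{align*}
\mathrm{L_2SeCl_2} + 2\,\mathrm{H_2O} &\longrightarrow \mathrm{L_2Se(OH)_2} + 2\,\mathrm{HCl},
  & \Delta E &\approx -106.07\ \mathrm{kcal/mol},\\
\mathrm{L_2Se(OH)_2} + 2\,\mathrm{RSH} &\longrightarrow \mathrm{L_2Se(SR)_2} + 2\,\mathrm{H_2O},
  & \Delta E &\approx +96.63\ \mathrm{kcal/mol}.
\end{align*}
```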
Procedia PDF Downloads 305
249 The Strategic Importance of Technology in the International Production: Beyond the Global Value Chains Approach
Authors: Marcelo Pereira Introini
Abstract:
The global value chains (GVC) approach contributes to a better understanding of the organization of international production amid globalization’s second unbundling from the 1970s on. Mainly due to the tools that help to understand the importance of critical competences, technological capabilities, and the functions performed by each player, GVC research flourished in recent years, rooted in discussions of the possibilities of integration and repositioning along regional and global value chains. In this context, part of the literature endorsed the more optimistic view that engaging in fragmented production networks could represent learning opportunities for developing countries’ firms, since the relationship with transnational corporations could allow them to build skills and competences. Increasing recognition that GVCs are based on asymmetric power relations, however, offered another view of the benefits, costs, and development possibilities. Since leading companies tend to restrict the replication of their technologies and capabilities by their suppliers, alternative strategies beyond functional specialization, seen as a way to integrate value chains, began to be broadly highlighted. This paper organizes a coherent narrative about the shortcomings of the GVC analytical framework, while recognizing its multidimensional contributions and recent developments. We adopt two different and complementary perspectives to explore the idea of integration in international production. On one hand, we emphasize obstacles beyond production components, analyzing the role played by intangible assets and intellectual property regimes. On the other hand, we consider the importance of domestic production and innovation systems for technological development. 
In order to provide a deeper understanding of the restrictions on technological learning of developing countries’ firms, we first build on the notion of intellectual monopoly to analyze how flagship companies can prevent subordinated firms from improving their positions in fragmented production networks. Based on intellectual property protection regimes, we discuss the increasing asymmetries between these players and the decreasing access of part of them to strategic intangible assets. Second, we debate the role of productive-technological ecosystems and of interactive and systemic technological development processes, as concepts of the Innovation Systems approach. Supporting the idea not only that endogenous advantages are important for the international competition of developing countries’ firms, but also that building these advantages itself can be a source of technological learning, we focus on local efforts as a crucial element that cannot be replaced by technology imported from abroad. Finally, the paper contributes to the discussion about technological development as a two-dimensional dynamic. If GVC analysis tends to underline a company-based perspective, stressing the learning opportunities associated with GVC integration, the historical involvement of national States brings up the debate about technology as a central aspect of interstate disputes. In this sense, technology is seen as part of military modernization before also being used in civil contexts, which presupposes its role in national security and productive autonomy strategies. From this outlook, it is important to consider it as an asset that, incorporated in sophisticated machinery, can be the target of state policies besides the protection provided by intellectual property regimes, such as export controls and inward-investment restrictions. Keywords: global value chains, innovation systems, intellectual monopoly, technological development
Procedia PDF Downloads 82
248 Influence of Natural Rubber on the Frictional and Mechanical Behavior of the Composite Brake Pad Materials
Authors: H. Yanar, G. Purcek, H. H. Ayar
Abstract:
The ingredients of the composite materials used for the production of composite brake pads play an important role in the safe braking performance of automobiles and trains. Therefore, the ingredients must be selected carefully and used in appropriate ratios in the matrix structure of the brake pad material. In the present study, a non-asbestos organic composite brake pad material containing binder resin, space fillers, solid lubricants, and a friction modifier was developed, and its filler content was optimized by adding natural rubber at different rates into the specified matrix structure in order to achieve the best combination of tribo-performance and mechanical properties. For this purpose, four compositions with different rubber contents (2.5wt.%, 5.0wt.%, 7.5wt.% and 10wt.%) were prepared, and then test samples with a diameter of 20 mm and length of 15 mm were produced to evaluate the friction and mechanical behavior of each mixture. The friction and wear tests were performed using a pin-on-disc type test rig designed according to the French standard NF F 11-292. All test samples were subjected to two different types of friction tests, defined as periodic braking and continuous braking (also known as the fade test). In this way, the coefficient of friction (CoF) of composite samples with different rubber contents was determined as a function of the number of braking cycles and the temperature of the disc surface. The results demonstrated that the addition of rubber into the matrix structure of the composite caused a significant change in the CoF. The average CoF of the composite samples increased linearly with increasing rubber content in the matrix. While the average CoF was 0.19 for the rubber-free composite, the composite sample containing 10wt.% rubber had the maximum CoF of about 0.24. Although the CoF of the composite samples increased, the specific wear rate decreased with increasing rubber content in the matrix. 
On the other hand, it was observed that the CoF decreased with increasing temperature generated between the sample and the disc, depending on the rubber content. While the CoF decreased to a minimum value of 0.15 at 400 °C for the rubber-free composite sample, the sample with the maximum rubber content of 10wt.% exhibited the lowest value of 0.09 at the same temperature. The addition of rubber into the matrix structure decreased the hardness and strength of the samples. It was concluded from the results that the composite matrix with 5 wt.% rubber had the best composition with regard to the required frictional and mechanical behavior. This composition has an average CoF of 0.21, a specific wear rate of 0.024 cm³/MJ and a hardness value of 63 HRX. Keywords: brake pad composite, friction and wear, rubber, friction materials
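The specific wear rate reported above (cm³/MJ) is conventionally the wear volume per unit of dissipated friction energy; a minimal sketch of that calculation follows, with made-up inputs rather than the study's measurements.

```python
# Specific wear rate = wear volume / dissipated friction energy.
# The inputs below are made-up examples, not the study's measurements.
def specific_wear_rate(volume_loss_cm3: float, normal_force_n: float,
                       sliding_distance_m: float, cof: float) -> float:
    """Returns cm^3/MJ; friction energy = F_N * mu * s."""
    friction_energy_mj = normal_force_n * cof * sliding_distance_m / 1e6
    return volume_loss_cm3 / friction_energy_mj

# e.g. 0.012 cm^3 lost under 500 N after 4800 m of sliding at CoF 0.21
print(round(specific_wear_rate(0.012, 500.0, 4800.0, 0.21), 4))  # 0.0238
```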
Procedia PDF Downloads 139
247 Cleaning of Polycyclic Aromatic Hydrocarbons (PAH) Obtained from Ferroalloys Plant
Authors: Stefan Andersson, Balram Panjwani, Bernd Wittgens, Jan Erik Olsen
Abstract:
Polycyclic aromatic hydrocarbons (PAH) are organic compounds consisting only of hydrogen and carbon arranged in aromatic rings. PAH are neutral, non-polar molecules that are produced by the incomplete combustion of organic matter. These compounds are carcinogenic and interact with biological nucleophiles to inhibit the normal metabolic functions of cells. In Norway, the most important sources of PAH pollution are considered to be aluminum plants, the metallurgical industry, offshore oil activity, transport, and wood burning. Stricter governmental regulations regarding emissions to the outer and internal environment, combined with increased awareness of the potential health effects, have motivated Norwegian metal industries to increase their efforts to reduce emissions considerably. One of the objectives of the ongoing industry and Norwegian Research Council supported "SCORE" project is to reduce potential PAH emissions from the off-gas stream of a ferroalloy furnace through controlled combustion in a dedicated combustion chamber. The sizing and configuration of the combustion chamber depend on the combined properties of the bulk gas stream and the properties of the PAH itself. In order to achieve efficient and complete combustion, the residence time and minimum temperature need to be optimized. For this design approach, reliable kinetic data for the individual PAH species and/or groups thereof are necessary. However, kinetic data on the combustion of PAH are difficult to obtain and there are only a limited number of studies. The paper presents an evaluation of kinetic data for some of the PAH obtained from the literature. In the present study, the oxidation is modelled both for pure PAH and for PAH mixed with process gas. Using a perfectly stirred reactor modelling approach, the oxidation is modelled with advanced reaction kinetics to study the influence of residence time and temperature on the conversion of PAH to CO2 and water. 
A Chemical Reactor Network (CRN) approach is developed to understand the oxidation of PAH inside the combustion chamber. Chemical reactor network modeling has been found to be a valuable tool in the evaluation of the oxidation behavior of PAH under various conditions. Keywords: PAH, PSR, energy recovery, ferroalloy furnace
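The residence-time/temperature trade-off described above can be sketched with first-order Arrhenius kinetics in a perfectly stirred reactor; the pre-exponential factor and activation energy below are placeholders, not fitted PAH data.

```python
import math

# First-order Arrhenius burnout in a perfectly stirred reactor; the
# pre-exponential factor and activation energy are placeholders, not
# fitted PAH kinetics.
R = 8.314  # gas constant, J/(mol K)

def conversion(temp_k: float, residence_s: float,
               a_factor: float = 1e10, ea_j_mol: float = 2.0e5) -> float:
    """Fraction of PAH oxidised: X = 1 - exp(-k*tau), k = A*exp(-Ea/RT)."""
    k = a_factor * math.exp(-ea_j_mol / (R * temp_k))
    return 1.0 - math.exp(-k * residence_s)

for t in (1000.0, 1200.0, 1400.0):
    print(f"T={t:.0f} K, tau=0.5 s -> X={conversion(t, 0.5):.4f}")
```

Raising either temperature or residence time pushes the conversion toward unity, which is why both appear as the key design variables for the chamber.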
Procedia PDF Downloads 273
246 Assessment of Health Literacy and Awareness of Female Residents of Barangay Dagatan, Sabang, and Marauoy Lipa, Batangas on Polycystic Ovarian Syndrome: A Cross-Sectional Study
Authors: Jean Gray C. Achapero, Mary Margareth P. Ancheta, Patricia Anjelika A. Angeles, Shannon Denzel S. Ao Tai, Carl Brandon C. Barlis, Chrislen Mae B. Benavidez
Abstract:
Health literacy and awareness of polycystic ovarian syndrome (PCOS) is a global issue that is under-addressed in the Philippines. A thorough review of the country's ability to recognize and comprehend the severity of the syndrome should be undertaken, as early treatment is essential to avoid further complications of the disorder. This research aims to assess the health literacy and awareness of the female residents of Barangay Dagatan, Sabang, and Marauoy Lipa, Batangas on PCOS. It followed a cross-sectional design, and data gathering was done through a pre-assessment using the Single Item Literacy Screener (SILS) and an online population-based survey questionnaire about PCOS awareness. The participants, selected according to the objectives via purposive sampling, were females aged 18-45 years old. Data were analyzed statistically using STATA 13.1 software. The study showed that 339 (76%) of the 444 respondents passed the SILS, meaning the residents have proficient health literacy. Among the 339 respondents, 87% (287) had previous knowledge of PCOS. The respondents showed minimal awareness of PCOS symptoms, which could be attributed to the broad spectrum of available information. Respondents were most knowledgeable about PCOS physiology, treatment, beliefs, and remedies. The respondents’ age had no significant association with their health literacy (p=0.31) or PCOS awareness (p=0.60). A significant association was noted, however, between their educational attainment and both their health literacy (p<0.0001) and PCOS awareness (p=0.001). It is suggested that reproductive health education, even in the lower year levels, be optimized and that Local Government Unit (LGU)/Non-Government Organization (NGO)-held seminars be conducted for knowledge reinforcement. 
Reliable health information should be more accessible to the public, and clinicians must emphasize the importance of early screening as part of the routine physical examination for women of reproductive age to increase health literacy and awareness of PCOS and to encourage active engagement in the management of the disease. Keywords: age, awareness, educational attainment, health literacy, polycystic ovarian syndrome
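The educational-attainment association reported above is the kind of result a chi-square test of independence yields; the contingency table below is invented for illustration (its margins merely echo the 339/444 SILS pass rate) and does not reproduce the study's data.

```python
# Invented contingency table (margins echo the 339/444 SILS pass rate);
# it does not reproduce the study's data.
from scipy.stats import chi2_contingency

#            passed SILS  failed SILS
table = [[210, 30],    # tertiary education
         [129, 75]]    # secondary or below

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3g}")
```

A small p-value here, as in the study, indicates that literacy and educational attainment are not independent.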
Procedia PDF Downloads 230
245 Determinants of Corporate Social Responsibility Adoption: Evidence from China
Authors: Jing (Claire) LI
Abstract:
Two decades of economic reforms, from 2000 to 2020, have brought China unprecedented economic growth. There is an urgent call for research on corporate social responsibility (CSR) in the context of China because, while China continues to develop into a global trading market, it suffers from various serious problems relating to CSR. This study analyses the factors affecting the adoption of CSR practices by Chinese listed companies. The author proposes a new framework of factors of CSR adoption. Alongside common organisational and external factors in the literature (including organisational support, company size, shareholder pressures, and government support), this study introduces two additional factors, dynamic capability and regional culture. A survey questionnaire on the CSR adoption of Chinese listed companies in the Shenzhen and Shanghai indices was conducted from December 2019 to March 2020 to collect data on the factors that affect the adoption of CSR. After data collection, this study performed factor analysis to reduce the number of measurement items to several main factors. This procedure confirms the proposed framework and identifies the significant factors. Through this analysis, the study identifies four grouped factors as determinants of CSR adoption. The first factor loading includes dynamic capability and organisational support. The study finds that they are positively related to the first factor, so the first factor mainly reflects the capabilities of companies, which is one component of the internal factors. In the second factor, the measurement items of stakeholder pressures mainly come from regulatory bodies, customers and suppliers, employees and the community, and shareholders. They are positively related to the second factor and reflect stakeholder pressures, which is one component of the external factors. The third factor reflects organisational characteristics. 
Its variables include company size and cultural score. Among these variables, company size is negatively related to the third factor. The resulting factor loading of the third factor implies that the organisational factor is an important determinant of CSR adoption. Cultural consistency, the variable in the fourth factor, is positively related to that factor. It represents the difference between the perception of managers and the actual culture of the organisations in terms of cultural dimensions, which is one component of the internal factors. It implies that regional culture is an important factor in CSR adoption. Overall, the results are consistent with previous literature. This study is significant from both theoretical and empirical perspectives. First, from a theoretical perspective, this research combines stakeholder theory, the dynamic capability view of the firm, and neo-institutional theory in CSR research. Based on the association of these three theories, this study introduces two new factors (dynamic capability and regional culture) to provide a better framework for CSR adoption. Second, this study contributes to the empirical literature on CSR in the context of China. Many Chinese companies still lack recognition of the importance of adopting CSR practices. This study built a framework that may help companies design resource allocation strategies and evaluate future CSR and management practices at an early stage. Keywords: China, corporate social responsibility, CSR adoption, dynamic capability, regional culture
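The factor-extraction step described above can be sketched with scikit-learn's FactorAnalysis on synthetic Likert-style responses; the latent structure, item counts and data below are illustrative assumptions, not the study's instrument.

```python
# Sketch of the dimensionality-reduction step using scikit-learn's
# FactorAnalysis on synthetic survey responses; the data and latent
# structure are illustrative, not the study's instrument.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 12

# Two synthetic latent drivers (e.g. "capabilities", "stakeholder pressure")
latent = rng.normal(size=(n_respondents, 2))
loadings = rng.normal(size=(2, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=4)  # four grouped factors, as in the study
scores = fa.fit_transform(responses)
print(scores.shape)          # (200, 4)
print(fa.components_.shape)  # (4, 12)
```

Inspecting `fa.components_` (the factor loadings) is what lets one label each extracted factor with the measurement items that load on it.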
Procedia PDF Downloads 136
244 Knowledge of Quality Assurance and Quality Control in Mammography; A Study among Radiographers of Mammography Settings in Sri Lanka
Authors: H. S. Niroshani, W. M. Ediri Arachchi, R. Tudugala, U. J. M. A. L. Jayasinghe, U. M. U. J. Jayasekara, P. B. Hewavithana
Abstract:
Mammography is used as a screening tool for the early diagnosis of breast cancer. It is also useful in refining the diagnosis of breast cancer, either by assessment or by work-up after a suspicious area in the breast has been detected. In order to detect breast cancer accurately and at the earliest possible stage, the image must have optimum contrast to reveal mass densities and the spiculated fibrous structures radiating from them. In addition, the spatial resolution must be adequate to reveal the distribution of microcalcifications and their shape. The above factors can be optimized by implementing an effective QA programme to enhance the accuracy of mammographic diagnosis. The radiographer’s knowledge of QA is therefore instrumental in routine mammographic practice. The aim of this study was to assess radiographers’ knowledge of Quality Assurance (QA) and Quality Control (QC) programmes in relation to mammographic procedures. A cross-sectional study was carried out among all radiographers working in each mammography setting in Sri Lanka. Pre-tested, anonymous, self-administered questionnaires were circulated among the study population, and duly filled questionnaires returned within a period of three months were taken into account. Data on demographic information, knowledge of the QA programme and associated QC tests, and overall knowledge of QA and QC programmes were obtained. Data analysis was performed using IBM SPSS statistical software (version 20.0). The total response rate was 59.6% and the average knowledge score was 54.15±11.29 SD out of 100. Knowledge was compared on the basis of education level, special training in mammography, and years of working experience in a mammographic setting. 
Of the 31 subjects, 64.5% (n=20) were graduate radiographers and 35.5% (n=11) were diploma holders, while 83.9% (n=26) of the radiographers had been specially trained in mammography and 16.1% (n=5) had not attended any special mammography training. It is also noted that 58.1% (n=18) of individuals had less than one year of experience and the remaining 41.9% (n=13) had more. Further, the results found a significant difference (P < 0.05) in the knowledge of QA and in the overall knowledge of the QA and QC programme across the categories of education level and working experience. The results also imply a significant difference (P < 0.05) in the knowledge of QC tests between the trained and non-trained radiographers. This study reveals that education level, working experience and training obtained particularly in the field of mammography have a significant impact on radiographers' knowledge of QA and QC in mammography. Keywords: knowledge, mammography, quality assurance, quality control
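A comparison of knowledge scores between trained and non-trained groups of the kind reported above can be sketched with an independent-samples t-test; the score lists below are fabricated to roughly mimic the reported mean of ~54/100, not the study's data.

```python
# Fabricated score lists that roughly mimic the reported mean (~54/100);
# not the study's data.
from scipy.stats import ttest_ind

trained = [62, 58, 55, 60, 57, 64, 59, 61, 56, 63]
untrained = [48, 44, 51, 46, 49]

stat, p = ttest_ind(trained, untrained, equal_var=False)  # Welch's t-test
print(f"t={stat:.2f}, p={p:.4f}")
```

Welch's variant is used here because the two groups have unequal sizes and need not share a variance.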
Procedia PDF Downloads 332
243 Product Separation of Green Processes and Catalyst Recycling of a Homogeneous Polyoxometalate Catalyst Using Nanofiltration Membranes
Authors: Dorothea Voß, Tobias Esser, Michael Huber, Jakob Albert
Abstract:
The growing world population and the associated increase in demand for energy and consumer goods, as well as increasing waste production, require the development of sustainable processes. In addition, the increasing environmental awareness of our society is a driving force behind the requirement that processes be as resource and energy efficient as possible. In this context, the use of polyoxometalate (POM) catalysts has emerged as a promising approach for the development of green processes. POMs are bifunctional polynuclear metal-oxo-anion clusters characterized by strong Brønsted acidity, high proton mobility combined with fast multi-electron transfer, and tunable redox potential. In addition, POMs are soluble in many common solvents and exhibit resistance to hydrolytic and oxidative degradation. Due to their structure and excellent physicochemical properties, POMs are efficient acid and oxidation catalysts that have attracted much attention in recent years; oxidation processes with molecular oxygen are worth mentioning here. However, the fact that the POM catalysts are homogeneous poses a challenge for the downstream processing of product solutions and the recycling of the catalysts. In this regard, nanofiltration membranes have gained increasing interest in recent years, particularly due to their relative sustainability advantage over other technologies and their unique properties, such as increased selectivity towards multivalent ions. In order to establish an efficient downstream process for the highly selective separation of homogeneous POM catalysts from aqueous solutions using nanofiltration membranes, a laboratory-scale membrane system was designed and constructed. By varying various process parameters, a sensitivity analysis was performed on a model system to develop an optimized method for the recovery of POM catalysts. From this, process-relevant key figures, such as the rejection of various system components, were derived. 
These results form the basis for further experiments on other systems to test transferability to several separation tasks with different POMs and products, as well as for recycling experiments with the catalysts in laboratory-scale processes. Keywords: downstream processing, nanofiltration, polyoxometalates, homogeneous catalysis, green chemistry
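The rejection key figure mentioned above is conventionally R = 1 - Cp/Cf (permeate over feed concentration); a minimal sketch with invented concentrations, not measurements from the described rig:

```python
# R = 1 - Cp/Cf with invented concentrations, not rig measurements.
def rejection(feed_mg_l: float, permeate_mg_l: float) -> float:
    """Observed rejection: fraction of the feed solute held back."""
    return 1.0 - permeate_mg_l / feed_mg_l

# e.g. a POM catalyst at 500 mg/L in the feed and 5 mg/L in the permeate
print(f"{rejection(500.0, 5.0):.1%}")  # 99.0%
```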
Procedia PDF Downloads 89
242 Batch and Dynamic Investigations on Magnesium Separation by Ion Exchange Adsorption: Performance and Cost Evaluation
Authors: Mohamed H. Sorour, Hayam F. Shaalan, Heba A. Hani, Eman S. Sayed
Abstract:
Ion exchange adsorption has a long-standing history of success in seawater softening and selective ion removal from saline sources. Strong, weak and mixed-type ion exchange systems can be designed and optimized for a target separation. In this paper, different types of adsorbents comprising zeolite 13X and kaolin, in addition to polyacrylate/zeolite (AZ), polyacrylate/kaolin (AK) and stand-alone polyacrylate (A) hydrogel types, were prepared via microwave (M) and ultrasonic (U) irradiation techniques. They were characterized using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). The developed adsorbents were evaluated at bench scale and, based on the assessment results, a composite bed was formulated for performance evaluation in pilot-scale column investigations. Owing to the hydrogel nature of the partially crosslinked polyacrylate, the developed adsorbents manifested a swelling capacity of about 50 g/g. The pilot trials were carried out using magnesium-enriched Red Sea water to simulate Red Sea desalination brine. Batch studies indicated varying uptake efficiencies, with Mg adsorption decreasing across the prepared hydrogel types in the order AU>AM>AKM>AKU>AZM>AZU, at 108, 107, 78, 69, 66 and 63 mg/g, respectively. The composite bed adsorbent tested in up-flow mode column studies indicated good performance for Mg uptake. For an operating cycle of 12 h, the maximum uptake during the loading cycle approached 92.5-100 mg/g, which is comparable to the performance of some commercial resins. Different regenerants were explored to maximize regeneration and minimize the quantity of regenerant, including 15% NaCl, 0.1 M HCl and sodium carbonate. The best results were obtained with acidified sodium chloride solution. In conclusion, the developed cation exchange adsorbents comprising a clay or zeolite support showed adequate performance for Mg recovery in a saline environment. 
Column operation in the up-flow mode (approaching an expanded bed) is appropriate for this type of separation. Preliminary cost indicators for Mg recovery via ion exchange have been developed and analyzed. Keywords: batch and dynamic magnesium separation, seawater, polyacrylate hydrogel, cost evaluation
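The mg/g uptake capacities above follow from the standard batch mass balance q = (C0 - Ce)V/m; the sketch below uses invented inputs chosen to land near the AU hydrogel's 108 mg/g figure, not the study's actual concentrations.

```python
# Batch uptake mass balance q = (C0 - Ce) * V / m; the inputs are
# invented examples, not concentrations measured in the study.
def uptake_mg_g(c0_mg_l: float, ce_mg_l: float,
                volume_l: float, adsorbent_g: float) -> float:
    """Solute removed from solution per gram of adsorbent (mg/g)."""
    return (c0_mg_l - ce_mg_l) * volume_l / adsorbent_g

# e.g. Mg drops from 1500 to 960 mg/L in 0.25 L over 1.25 g of hydrogel
print(uptake_mg_g(1500.0, 960.0, 0.25, 1.25))  # 108.0
```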
Procedia PDF Downloads 135
241 Simulation of Maximum Power Point Tracking in a Photovoltaic System: A Circumstance Using Pulse Width Modulation Analysis
Authors: Asowata Osamede
Abstract:
Optimizing the output power gain of stand-alone photovoltaic (PV) systems has been a major focus of PV research in recent times, owing to their low carbon emissions and efficiency. Power failures or outages from commercial providers do not, in general, promote development in the public and private sectors, and these basically limit the development of industries. A well-structured PV system is important for efficient and cost-effective monitoring. The purpose of this paper is to validate the maximum power point of an off-grid PV system, taking into consideration the most effective tilt and orientation angles for PVs in the southern hemisphere. This paper is based on analyzing the system using a solar charger with MPPT from a pulse width modulation (PWM) perspective. The power conditioning device chosen is a solar charger with MPPT. The practical setup consists of a PV panel set to an orientation angle of 0° north, with corresponding tilt angles of 36°, 26° and 16°. The loads employed in this setup are three lead-acid batteries (LAB). The percentage of fully charged, charging and not-charging conditions is observed for all three batteries. The results obtained in this research are used to draw conclusions that provide a benchmark for researchers and scientists worldwide, so as to establish the best tilt and orientation angles for the maximum power point in a basic off-grid PV system. A quantitative analysis is employed in this research. Quantitative research tends to focus on measurement and proof; inferential statistics are frequently used to generalize what is found about the study sample to the population as a whole. 
This involves selecting and defining the research question, deciding on a study type, deciding on the data collection tools, selecting the sample and its size, and analyzing, interpreting and validating the findings. Preliminary results, which include regression analysis (a normal probability plot and residual plot using a sixth-order polynomial), showed the maximum power point of the system. Of the tilt angles tested for maximum power point tracking, the 36° tilt angle provided the best average on-time, which in turn put the system into a pulse width modulation stage. Keywords: power conversion, meteonorm, PV panels, DC-DC converters
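The sixth-order polynomial regression used to locate the maximum power point can be sketched as follows; the power-voltage samples are synthetic (a noisy parabola peaking at an assumed 14 V), not the paper's measurements.

```python
import numpy as np

# Synthetic power-voltage curve (true peak at 14 V) plus noise; the data
# are invented, not the paper's measurements.
rng = np.random.default_rng(1)
voltage = np.linspace(0.0, 20.0, 25)
power = -0.05 * (voltage - 14.0) ** 2 + 60.0 + rng.normal(0.0, 0.2, 25)

coeffs = np.polyfit(voltage, power, 6)   # sixth-order fit, as in the study
fit = np.poly1d(coeffs)
v_fine = np.linspace(0.0, 20.0, 2001)
v_mpp = v_fine[np.argmax(fit(v_fine))]   # voltage at maximum power point
print(round(float(v_mpp), 1))
```

Evaluating the fitted polynomial on a fine grid and taking the argmax recovers the maximum power point voltage despite the measurement noise.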
Procedia PDF Downloads 149
240 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments.
It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
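The workload-placement and cost-modeling ideas above can be sketched in miniature: pick the cheapest inter-cloud route whose transfer time fits a deadline. The provider names, egress prices, and bandwidths below are entirely hypothetical placeholders, not figures from the paper:

```python
def transfer_cost_and_time(size_gb, egress_per_gb, bandwidth_gbps):
    """Return (dollar cost, seconds) to move size_gb over one route."""
    cost = size_gb * egress_per_gb
    seconds = size_gb * 8.0 / bandwidth_gbps  # GB -> gigabits, then divide by Gbps
    return cost, seconds

def cheapest_route(size_gb, routes, max_seconds):
    """Pick the cheapest route whose transfer time fits the deadline.
    routes maps name -> (egress $/GB, sustained bandwidth in Gbps)."""
    feasible = {}
    for name, (price, bw) in routes.items():
        cost, secs = transfer_cost_and_time(size_gb, price, bw)
        if secs <= max_seconds:
            feasible[name] = cost
    return min(feasible, key=feasible.get) if feasible else None

routes = {"direct": (0.09, 1.0), "peered": (0.02, 0.2)}  # hypothetical providers
```

A real placement engine would also fold in latency, data-residency constraints, and time-varying prices, but the trade-off structure (cost versus transfer time) is the same.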
Procedia PDF Downloads 69
239 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques
Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev
Abstract:
Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance indicators and the demand of the medical facility. The hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes, and lithotripters), and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was the open database of the statistical service Eurostat. This source was chosen because its databases contain the complete and open information necessary for research tasks in the field of public health; in addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study covers 28 European countries for the period from 2007 to 2016. For all countries included in the study with sufficiently accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and interpretability of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries, and to construct separate regression models for them.
Therefore, the original time series were used as the objects of clustering. The k-medoids clustering algorithm was used: the sampled objects themselves served as the centers of the obtained clusters, since determining a centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis it was possible to significantly improve the predictive power of the models: for example, in one of the clusters the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values of the developed models have a relatively low level of error and can be used to make decisions on the resource provision of the hospital with medical personnel. The research displays the strong dependence between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential to significantly improve health services, and medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
Keywords: data analysis, demand modeling, healthcare, medical facilities
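The MAPE figure quoted above (0.82% in the best cluster) follows the standard definition of mean absolute percentage error, sketched here in plain Python. The series below are invented placeholders, not the Eurostat data:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent; actual values must be non-zero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical hospital-discharge series and a model forecast for it.
observed = [100.0, 200.0, 400.0]
predicted = [110.0, 190.0, 400.0]
error = mape(observed, predicted)  # (10% + 5% + 0%) / 3 = 5%
```

MAPE is scale-free, which is why it is convenient for comparing forecast quality across countries whose absolute discharge counts differ by orders of magnitude.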
Procedia PDF Downloads 145
238 Exploiting the Potential of Fabric Phase Sorptive Extraction for Forensic Food Safety: Analysis of Food Samples in Cases of Drug Facilitated Crimes
Authors: Bharti Jain, Rajeev Jain, Abuzar Kabir, Torki Zughaibi, Shweta Sharma
Abstract:
Drug-facilitated crimes (DFCs) entail the use of a single drug or a mixture of drugs to incapacitate a victim. Traditionally, biological samples have been collected from victims and analyzed to establish evidence of drug administration. Nevertheless, the rapid metabolism of various drugs and delays in analysis can impede the identification of such substances. To address this, the present article describes a rapid, sustainable, highly efficient, and miniaturized protocol for the identification and quantification of three sedative-hypnotic drugs, namely diazepam, chlordiazepoxide, and ketamine, in alcoholic beverages and complex food samples (cream biscuit, flavored milk, juice, cake, tea, sweets, and chocolate). The methodology uses fabric phase sorptive extraction (FPSE) to extract diazepam (DZ), chlordiazepoxide (CDP), and ketamine (KET); the extracts are then analyzed by gas chromatography-mass spectrometry (GC-MS). Several parameters, including the type of membrane, pH, agitation time and speed, ionic strength, sample volume, elution volume and time, and type of elution solvent, were screened and thoroughly optimized. Sol-gel Carbowax 20M (CW-20M) demonstrated the most effective extraction efficiency for the target analytes among all evaluated membranes. Under optimal conditions, the method displayed linearity within the range of 0.3–10 µg mL⁻¹ (or µg g⁻¹), with a coefficient of determination (R²) ranging from 0.996–0.999. The limits of detection (LODs) and limits of quantification (LOQs) for liquid samples were 0.020–0.069 µg mL⁻¹ and 0.066–0.22 µg mL⁻¹, respectively. Correspondingly, the LODs for solid samples ranged from 0.056–0.090 µg g⁻¹, while the LOQs ranged from 0.18–0.29 µg g⁻¹. Notably, the method showed good precision, with repeatability below 5% and reproducibility below 10%.
Furthermore, the FPSE-GC-MS method proved effective in determining diazepam (DZ) in forensic food samples connected to drug-facilitated crimes. Additionally, the proposed method was evaluated for its whiteness using the RGB12 algorithm.
Keywords: drug facilitated crime, fabric phase sorptive extraction, food forensics, white analytical chemistry
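LOD and LOQ figures like those reported above are conventionally derived from the calibration curve, commonly via the 3.3σ/slope and 10σ/slope rules. A minimal sketch of that calculation follows; the calibration points are invented for illustration (spanning the study's 0.3–10 µg mL⁻¹ working range), not the paper's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares line; returns slope, intercept, residual std dev."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    sd = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # residual standard deviation
    return slope, intercept, sd

def lod_loq(slope, sd):
    """Detection and quantification limits from calibration statistics."""
    return 3.3 * sd / slope, 10.0 * sd / slope

# Hypothetical calibration: concentration (ug/mL) vs. detector response.
conc = [0.3, 1.0, 2.0, 5.0, 10.0]
signal = [0.7, 2.1, 3.9, 10.1, 19.9]
slope, intercept, sd = linear_fit(conc, signal)
lod, loq = lod_loq(slope, sd)
```

By construction the LOQ is always 10/3.3 ≈ 3× the LOD, consistent with the spacing between the LOD and LOQ ranges the abstract reports.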
Procedia PDF Downloads 71
237 Analysis and Optimized Design of a Packaged Liquid Chiller
Authors: Saeed Farivar, Mohsen Kahrom
Abstract:
The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the model outputs include system performance variables such as power consumption, the coefficient of performance (COP), and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for optimizing design and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model accounts for all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve, and connecting pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow, and thermodynamic processes in each of these components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller was used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition was built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller.
The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that chiller manufacturers and research organizations interested in developing energy-efficient designs and analyses of compression chillers can take advantage of the presented study and its results.
Keywords: optimization, packaged liquid chiller, performance, simulation
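The COP figure such a model predicts is simply the ratio of cooling delivered to compressor power, with the condenser duty following from a steady-state energy balance. A back-of-envelope sketch for a 7.5 USRT machine like the test unit; the compressor draw below is a hypothetical number, not a measurement from the study:

```python
def tons_to_kw(usrt):
    """1 US refrigeration ton of cooling is approximately 3.517 kW."""
    return usrt * 3.517

def chiller_cop(cooling_kw, compressor_kw):
    """Coefficient of performance: useful cooling per unit of electrical input."""
    return cooling_kw / compressor_kw

def condenser_duty(cooling_kw, compressor_kw):
    """Steady-state energy balance: heat rejected = heat absorbed + work input."""
    return cooling_kw + compressor_kw

cooling = tons_to_kw(7.5)        # ~26.4 kW of cooling capacity
cop = chiller_cop(cooling, 6.6)  # assumed 6.6 kW compressor draw -> COP near 4
```

A full cycle model, like the one the paper develops, would compute the compressor power from refrigerant states rather than assuming it, but the bookkeeping between evaporator duty, compressor work, and condenser duty is the same.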
Procedia PDF Downloads 278
236 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image
Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias
Abstract:
Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical settings to obtain detailed anatomical and functional information. However, MRI scans can be expensive and time-consuming, and often require anesthetics to keep animals still during the imaging process. Prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity, so minimizing the duration and frequency of anesthesia is crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories to the reconstruction of abdominal MRI of mice sub-sampled below the limit defined by the Nyquist theorem. The methodology consists of using a fully sampled reference MRI of a female C57BL/6 mouse, acquired experimentally in a 4.7 Tesla small-animal MRI scanner using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment techniques: RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure).
The use of optimized sampling trajectories and the CS technique demonstrated the potential for a significant reduction, of up to 70%, in the image data that must be acquired. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals
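Two of the quality metrics named above have compact definitions, sketched here over flat pixel lists (SSIM is omitted since it needs windowed local statistics):

```python
import math

def rmse(ref, test):
    """Root mean square error between two equal-length pixel sequences."""
    n = len(ref)
    return (sum((r - t) ** 2 for r, t in zip(ref, test)) / n) ** 0.5

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * math.log10(max_val / e)
```

Because PSNR is a log of the peak-to-error ratio, every halving of RMSE buys about 6 dB, which is why small reconstruction improvements show up clearly on the PSNR scale.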
Procedia PDF Downloads 75
235 An Efficient Process Analysis and Control Method for Tire Mixing Operation
Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park
Abstract:
Since the tire production process is very complicated, company-wide management of it is difficult, requiring considerable capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building, and curing, the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role in the finished tire. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as the flexible job shop scheduling problem. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle.
At each iteration, the positions and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate that evaluates the difference between two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process
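The PSO update the abstract describes (each particle's velocity pulled toward its personal best and the swarm's global best) can be sketched generically. The sphere function below stands in for the makespan objective; the paper's scheduling-specific particle encoding and schedule-decoding step are not reproduced here:

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal particle swarm optimizer minimizing `objective` over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5.0, 5.0))
```

For a scheduling problem, the continuous particle would additionally be decoded into an operation sequence and machine assignment (e.g., by ranking its components) before evaluating the makespan.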
Procedia PDF Downloads 266
234 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper
Authors: Nicolas Bae, Theodore L. Karavasilis
Abstract:
Increasingly, a large number of new and existing tall buildings are required to improve their resilience against strong winds and earthquakes to minimize direct as well as indirect damage to society. Tall building structures serve essential functions in metropolitan regions, so their failure can be severely hazardous in socio-economic terms, which further raises the requirement for advanced seismic performance. To achieve these progressively stricter requirements, seismic reinforcement of some old, conventional buildings has become enormously costly, while methods of increasing buildings' resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices such as passive damper systems have still been regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at affordable prices. The main purpose of this paper is to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, a hybrid damper system, so that it can maximize its energy dissipation capacity in tall buildings against wind and earthquake loading, and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings, up to forty-storey steel frame buildings, by comparing the results of non-linear response history analysis (NLRHA) with and without the damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system, and the advantages of its use in tall buildings, can be verified since tall buildings tend to be governed by wind load in their normal state and by earthquake load after yielding of the steel plates. The modeling of the prototype tall building is conducted using the OpenSees software.
Three model variants were used to verify the performance of the damper: a bare MRF, an MRF with visco-elastic dampers, and an MRF with the visco-plastic model. A set of 22 seismic records was used, and the scaling procedure followed the FEMA code. It is shown that the MRF with viscous and visco-elastic dampers is markedly more effective at reducing inelastic deformation, such as roof displacement, maximum story drift, and roof velocity, than the bare MRF.
Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design
Procedia PDF Downloads 193
233 Knowledge and Attitude Towards Strabismus Among Adult Residents in Woreta Town, Northwest Ethiopia: A Community-Based Study
Authors: Henok Biruk Alemayehu, Kalkidan Berhane Tsegaye, Fozia Seid Ali, Nebiyat Feleke Adimassu, Getasew Alemu Mersha
Abstract:
Background: Strabismus is a visual disorder in which the eyes are misaligned and point in different directions. Untreated strabismus can lead to amblyopia, loss of binocular vision, and social stigma due to its appearance. Since knowledge is assumed to be pertinent for early screening and prevention of strabismus, the main objective of this study was to assess knowledge and attitudes toward strabismus in Woreta town, Northwest Ethiopia. Providing data in this area is important for planning health policies. Methods: A community-based cross-sectional study was conducted in Woreta town from April–May 2020. The sample size was determined using a single population proportion formula, taking a 50% proportion of good knowledge, a 95% confidence level, a 5% margin of error, and a 10% non-response rate. Accordingly, the final computed sample size was 424. All four kebeles were included in the study. There were 42,595 people in total, with 39,684 adults and 9,229 households. A sampling fraction 'k' was obtained by dividing the number of households by the calculated sample size of 424. Systematic random sampling with proportional allocation was used to select the participating households, with a sampling fraction (k) of 21, i.e., every 21st household was included in the study. One individual was selected randomly, using the lottery method, from each household with more than one adult to obtain the final sample. The data were collected through face-to-face interviews using a pretested, semi-structured questionnaire, which was translated from English to Amharic and back to English to maintain its consistency. Data were entered using EpiData version 3.1, then processed and analyzed with SPSS version 20. Descriptive and analytical statistics were employed to summarize the data. A p-value of less than 0.05 was used to declare statistical significance. Result: A total of 401 individuals aged over 18 years participated, with a response rate of 94.5%.
Of those who responded, 56.6% were male, and 36.9% of all participants were illiterate. The proportion of people with poor knowledge of strabismus was 45.1%, and 53.9% of respondents had a favorable attitude. Older age, a higher educational level, a history of eye examination, and a family history of strabismus were significantly associated with good knowledge of strabismus. A higher educational level, older age, and having heard about strabismus were significantly associated with a favorable attitude toward strabismus. Conclusion and recommendation: The proportions of good knowledge and favorable attitude towards strabismus were lower than previously reported in Gondar City, Northwest Ethiopia. There is a need to provide health education and promotion campaigns on strabismus to the community: what strabismus is, its possible treatments, and the need to bring children to an eye care center for early diagnosis and treatment. We advocate for prospective research employing qualitative study designs and suggest the exploration of studies that investigate causal-effect relationships.
Keywords: strabismus, knowledge, attitude, Woreta
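The sample-size arithmetic in the methods reproduces exactly with the single population proportion formula; the sketch below uses only the figures quoted in the abstract (p = 0.5, 95% confidence, 5% margin of error, 10% non-response, 9,229 households):

```python
import math

def sample_size(p=0.5, z=1.96, d=0.05, nonresponse=0.10):
    """Single population proportion formula, inflated for expected non-response."""
    n = math.ceil((z ** 2) * p * (1 - p) / d ** 2)  # 385 before non-response
    return math.ceil(n * (1 + nonresponse))

def sampling_fraction(households, n):
    """Systematic sampling interval k: every k-th household is approached."""
    return households // n

n = sample_size()               # -> 424, the study's final sample size
k = sampling_fraction(9229, n)  # -> 21, the reported sampling interval
```

Choosing p = 0.5 maximizes p(1 - p) and therefore gives the most conservative (largest) sample size when the true proportion is unknown, which is why it is the standard default.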
Procedia PDF Downloads 63