An Overview of the Wind and Wave Climate in the Romanian Nearshore
Authors: Liliana Rusu
Abstract:
The goal of the proposed work is to provide a more comprehensive picture of the wind and wave climate in the Romanian nearshore, using the results provided by numerical models. The Romanian coastal environment is located on the western side of the Black Sea, the most energetic part of the sea, an area with heavy maritime traffic and various offshore operations. Information about the wind and wave climate in the Romanian waters is mainly based on observations at the Gloria drilling platform (70 km from the coast). As regards the waves, the measurements of the wave characteristics are not very accurate due to the method used, and they are available only for a limited period. For this reason, wave simulations that cover large temporal and spatial scales represent an option to describe the wave climate better. To assess the wind climate in the target area over the period 1992–2016, data provided by the NCEP-CFSR (U.S. National Centers for Environmental Prediction - Climate Forecast System Reanalysis), consisting of wind fields at 10 m above sea level, are used. The high spatial and temporal resolution of the wind fields is sufficient to represent the wind variability over the area. For the same 25-year period as considered for the wind climate, this study characterizes the wave climate from a wave hindcast data set that uses NCEP-CFSR winds as input for a SWAN (Simulating WAves Nearshore) based model system. The wave simulation results, obtained with a two-level modelling scale, have been validated against both in situ measurements and remotely sensed data. The second level of the system, with a higher resolution in geographical space (0.02°×0.02°), is focused on the Romanian coastal environment. The main wave parameters simulated at this level are used to analyse the wave climate. The spatial distributions of the wind speed, wind direction, and the mean significant wave height have been computed as the average of the total data.
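The climate maps mentioned above (spatial distributions computed as the average of the total data) reduce, computationally, to a mean over the time axis of the gridded hindcast. A minimal sketch with toy arrays; the shapes and values are illustrative, not actual CFSR or SWAN output:

```python
import numpy as np

def climate_mean_maps(wind_speed, hs):
    """Average gridded hindcast fields over the time axis.

    wind_speed, hs: arrays of shape (n_times, n_lat, n_lon), hypothetical
    stand-ins for the CFSR wind and SWAN wave output. Returns the
    long-term mean wind-speed map and mean significant-wave-height map.
    """
    return wind_speed.mean(axis=0), hs.mean(axis=0)

# Toy hindcast: 4 time steps on a 2x2 grid of the high-resolution nest.
u10 = np.array([[[4.0, 5.0], [6.0, 7.0]],
                [[6.0, 7.0], [8.0, 9.0]],
                [[5.0, 6.0], [7.0, 8.0]],
                [[5.0, 6.0], [7.0, 8.0]]])
hs = 0.1 * u10  # crude illustrative scaling, not a physical relation
mean_u10, mean_hs = climate_mean_maps(u10, hs)
```

Seasonal maps follow the same pattern, averaging only the time steps belonging to each season.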
As the data indicate, the target area presents a generally moderate wave climate that is affected by the storm events developed in the Black Sea basin. Both the wind and wave climates present high seasonal variability. All the results are presented as maps that help identify the most hazardous areas. A local analysis has also been carried out at some key locations corresponding to highly sensitive areas, such as the main Romanian harbors.
Keywords: numerical simulations, Romanian nearshore, waves, wind
Procedia PDF Downloads 344

Performance Tests of Wood Glues on Different Wood Species Used in Wood Workshops: Morogoro, Tanzania
Authors: Japhet N. Mwambusi
Abstract:
Deforestation of high tropical forests for the solid wood furniture industry is among the agents contributing to climate change. This pressure is indirectly caused by furniture joint failure due to poor gluing technology, based on the improper matching of glues to wood species, which leads to low-quality, weak wood-glue joints. This study was carried out to run performance tests of wood glues on different wood species used in wood workshops in Morogoro, Tanzania, whereby three popular wood species, C. lusitanica, T. grandis, and E. maidenii, were tested against five glues found on the market: Woodfix, Bullbond, Ponal, Fevicol, and Coral. The findings were needed to develop a guideline for selecting the proper glue for joining a particular wood species. Random sampling was employed to interview carpenters while conducting a survey on their background, such as education level, and to determine the factors that influence their choice of glues. A Monsanto Tensiometer was used to determine the bonding strength of the identified wood glues on the different wood species following the British Standard for testing wood shear strength (BS EN 205). Data obtained from the carpenter interviews were analyzed with the Statistical Package for the Social Sciences (SPSS) to allow the comparison of different data, while laboratory data were compiled, related, and compared using MS Excel worksheets as well as Analysis of Variance (ANOVA). Results revealed that, among the five wood glues tested in the laboratory on the three wood species, Coral performed much better, with average shear strengths of 4.18 N/mm2, 3.23 N/mm2, and 5.42 N/mm2 for Cypress, Teak, and Eucalyptus, respectively. This indicates that, for a strong joint to be formed in all three wood species, both softwood and hardwood, Coral should be the first choice.
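The laboratory comparison above boils down to averaging replicate shear-strength readings per glue and ranking the means. A toy sketch; the replicate values below are invented for illustration and are not the study's measurements:

```python
# Hypothetical replicate shear-strength readings (N/mm^2) per glue on one
# wood species, mimicking the BS EN 205 comparison; values are illustrative.
shear = {
    "Coral":   [5.3, 5.5, 5.4],
    "Ponal":   [4.1, 4.0, 4.2],
    "Fevicol": [3.8, 3.9, 3.7],
}

def mean_strength(readings):
    """Average the replicates for each glue."""
    return {glue: sum(v) / len(v) for glue, v in readings.items()}

def best_glue(readings):
    """Return the glue with the highest mean shear strength."""
    means = mean_strength(readings)
    return max(means, key=means.get)

means = mean_strength(shear)
winner = best_glue(shear)
```

An ANOVA on the replicates (e.g., with `scipy.stats.f_oneway`) would then test whether the differences between the glue means are statistically significant.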
The guideline table developed from this research can help carpenters select the proper glue for a particular wood species so as to achieve adequate glue-bond strength. This will secure the furniture market as well as reduce pressure on the forests for furniture production, because furniture with strong joints lasts longer. Indeed, this can be a good strategy for slowing climate change in the tropics, which results in part from the high deforestation of trees for furniture production.
Keywords: climate change, deforestation, gluing technology, joint failure, wood-glue, wood species
Procedia PDF Downloads 240

Sensing Endocrine Disrupting Chemicals by Virus-Based Structural Colour Nanostructure
Authors: Lee Yujin, Han Jiye, Oh Jin-Woo
Abstract:
The adverse effects of endocrine disrupting chemicals (EDCs) have attracted considerable public interest. The benzene-like structure of EDCs mimics hormones naturally occurring in vivo and alters the physiological function of the endocrine system. Although some of the most representative EDCs, such as polychlorinated biphenyls (PCBs) and phthalate compounds, have already been prohibited from production and use in many countries, PCBs and phthalates in plastic products, as flame retardants and plasticizers, are still in circulation today. EDCs can be released from products during use and disposal, causing serious environmental and health issues. Here, we developed a virus-based structurally coloured nanostructure that can detect minute EDC concentrations sensitively and selectively. This structurally coloured nanostructure exhibits characteristic angle-independent colours due to the regular virus bundle structure formed through a simple pulling technique. A designed number of different colour bands can be formed by controlling the concentration of the virus solution and the pulling speed. The virus, M-13 bacteriophage, was genetically engineered to react with specific EDCs, typically PCBs and phthalates. The M-13 bacteriophage surface (pVIII major coat protein) was decorated with benzene-derivative-binding peptides (WHW) through the phage display method. In an initial assessment, the virus-based colour sensor was exposed to several organic chemicals, including benzene, toluene, phenol, chlorobenzene, and phthalic anhydride. Along with the selectivity evaluation, the virus-based colour sensor was also tested for sensitivity. Phthalic anhydride and chlorobenzene at 10 to 300 ppm were detected by the colour sensor, which showed significant sensitivity, with a dissociation constant of about 90.
Notably, all measurements were analyzed through principal component analysis (PCA) and linear discriminant analysis (LDA) and exhibited clear discrimination upon exposure to the two categories of EDCs (PCBs and phthalates). Because of its easy fabrication, high sensitivity, and superior selectivity, the M-13 bacteriophage-based colour sensor could be a simple and reliable portable sensing system for environmental monitoring, healthcare, social security, and so on.
Keywords: M-13 bacteriophage, colour sensor, genetic engineering, EDCs
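The PCA/LDA discrimination described above can be sketched as follows; to keep the example dependency-free, a nearest-centroid rule in PCA space stands in for the LDA step, and the sensor response vectors are invented, not measured data:

```python
import numpy as np

# Toy sensor colour-shift vectors (e.g., RGB changes of the virus film)
# for two EDC classes; values are illustrative only.
X = np.array([[1.0, 0.2, 0.1],   # "PCB-like" responses
              [1.1, 0.1, 0.2],
              [0.1, 1.0, 0.9],   # "phthalate-like" responses
              [0.2, 1.1, 1.0]])
y = np.array([0, 0, 1, 1])

# PCA via SVD on mean-centred data: project onto the top component.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt[0]

# Nearest-centroid discrimination in PCA space (a simplified stand-in
# for the LDA step described in the abstract).
c0, c1 = scores[y == 0].mean(), scores[y == 1].mean()
pred = np.where(np.abs(scores - c0) < np.abs(scores - c1), 0, 1)
```

With real data, `sklearn.decomposition.PCA` followed by `sklearn.discriminant_analysis.LinearDiscriminantAnalysis` would replace the two hand-rolled steps.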
Procedia PDF Downloads 242

Verification Protocols for the Lightning Protection of a Large-Scale Scientific Instrument in Harsh Environments: A Case Study
Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda
Abstract:
This paper is devoted to the study of the most suitable protocols to verify the lightning protection and ground resistance quality of a large-scale scientific facility located in a harsh environment. We illustrate this work by reviewing a case study: the largest telescopes of the Northern Hemisphere Cherenkov Telescope Array, CTA-N. This array hosts sensitive, high-speed optoelectronic instrumentation and sits on clear, obstacle-free terrain at around 2400 m above sea level. The site offers a top-quality sky but also features challenging conditions for a lightning protection system: the terrain is volcanic and has resistivities well above 1 kOhm·m. In addition, the environment often exhibits relative humidities well below 5%. On the other hand, the high complexity of a Cherenkov telescope structure does not allow a straightforward application of lightning protection standards. CTA-N has been conceived as an array of fourteen Cherenkov telescopes of two different sizes, to be constructed on La Palma Island, Spain. Cherenkov telescopes can provide valuable information on different astrophysical sources from the gamma rays reaching the Earth’s atmosphere. The largest telescopes of CTA are called LSTs, and the construction of the first one was finished in October 2018. The LST has a shape which resembles a large parabolic antenna, with a 23-meter reflective surface supported by a tubular structure made of carbon fibre and steel tubes. The reflective surface covers 400 square meters and is made of an array of segmented mirrors that can be controlled individually by a subsystem of actuators. This surface collects and focuses the Cherenkov photons into the camera, where 1855 photosensors convert the light into electrical signals that can be processed by dedicated electronics. We describe here how the risk assessment of direct strike impacts was made and how the down conductors and the grounding system were both tested.
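For context, a direct-strike risk assessment of the kind mentioned above typically starts from the expected annual number of strikes: the local ground flash density times an equivalent collection area. The sketch below uses the IEC 62305-style collection area of a free-standing slender structure; the height and flash density are assumed values, not CTA-N site data:

```python
import math

def annual_direct_strikes(ng_per_km2, height_m):
    """Expected direct strikes per year for a slender structure.

    Uses the IEC 62305-style equivalent collection area of a free-standing
    structure of height H, A_d = pi * (3H)^2, multiplied by the ground
    flash density N_g. Inputs here are illustrative assumptions.
    """
    area_m2 = math.pi * (3.0 * height_m) ** 2
    return ng_per_km2 * area_m2 * 1e-6  # convert m^2 to km^2

# e.g. a 45 m high telescope structure at N_g = 1 flash/km^2/year
n_d = annual_direct_strikes(1.0, 45.0)
```

Comparing `n_d` against the tolerable risk of the standard then drives the choice of protection level for the down conductors and grounding system.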
The verification protocols which should be applied for the commissioning and operation phases are then explained. We focus in particular on the assessment of ground resistance quality.
Keywords: grounding, large scale scientific instrument, lightning risk assessment, lightning standards and safety
Procedia PDF Downloads 123

Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques
Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng
Abstract:
The nature of tires makes them extremely challenging to recycle because of their chemically cross-linked polymer structure; they are neither fusible nor soluble and consequently cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water, and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases, and oil. The composition of oils derived from waste tires has properties in common with commercial diesel fuel. The problem associated with the light oil derived from the pyrolysis of waste tires is its high sulfur content (> 1.0 wt.%), so it emits harmful sulfur oxide (SOx) gases to the atmosphere when combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species in liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for the efficient removal of complex and non-complex sulfur species in TPO. The study focuses on optimizing the cleaning process (removal of impurities and asphaltenes) by varying the process parameters: temperature, stirring speed, acid/oil ratio, and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effect of temperature, pressure, and time will be determined for the vacuum distillation of both raw TPO and the acid-treated oil for comparison purposes.
Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones via a photocatalyzed system using TiO2 as a catalyst and hydrogen peroxide as an oxidizing agent; finally, acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will then be used to adsorb traces of sulfurous compounds that remain after the photocatalytic desulphurization step. This combined desulphurization approach is expected to give high desulphurization efficiency with reasonable oil recovery.
Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis
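Photocatalytic oxidative desulphurization is often modelled with pseudo-first-order kinetics; the sketch below illustrates that bookkeeping with an assumed rate constant, not a value measured in this study:

```python
import math

def residual_sulfur(c0_wtpct, k_per_min, t_min):
    """Pseudo-first-order decay C(t) = C0 * exp(-k * t), a common model
    for photocatalytic oxidative desulphurization. The rate constant k
    here is hypothetical."""
    return c0_wtpct * math.exp(-k_per_min * t_min)

def removal_efficiency(c0, c):
    """Percentage of sulfur removed relative to the initial content."""
    return 100.0 * (1.0 - c / c0)

c0 = 1.2                                  # wt.% sulfur in raw TPO (illustrative)
c_60 = residual_sulfur(c0, 0.02, 60.0)    # after 60 min of irradiation
eff = removal_efficiency(c0, c_60)
```

Fitting ln(C0/C) versus time to the experimental sulfur contents would recover the actual rate constant for a given catalyst loading and oxidant dose.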
Procedia PDF Downloads 272

Regulating Transnational Corporations and Protecting Human Rights: Analyzing the Efficiency of International Legal Framework
Authors: Stellina Jolly
Abstract:
The period from July 18th to August 19th, 2013 has gone down in the history of India as the time of the country’s first environmental referendum. The Supreme Court had ruled that the Vedanta Group's bauxite mining project in the Niyamgiri Hills of Orissa would have to get clearance from the gram sabha, which would consider the cultural and religious rights of the tribal people and forest dwellers living in the Rayagada and Kalahandi districts. In the Niyamgiri Hills, the people of small tribal hamlets were asked to voice their opinion on bauxite mining in their habitat. The ministry reiterated its stand that mining cannot be allowed on the Niyamgiri Hills because it would affect the rights of the Dongria Kondh. The tribal people who occupy the Niyamgiri Hills in Eastern India achieved their first success in 2010 in their struggle to protect and preserve their existence, culture, and land against Vedanta, a London-based mining giant. In August 2010, the Government of India revoked permission for Vedanta Resources to mine bauxite from the hills in Orissa State where the Dongria Kondh live as forest dwellers. This came after various protests and reports, including an Amnesty report highlighting that an alumina refinery in eastern India, operated by a subsidiary of the mining company Vedanta, was accused of causing air and water pollution that threatens the health of local people and their access to water. The abuse of human rights by corporations is not a new issue; it has occurred in Africa, Asia, and other parts of the world. This paper focuses on the instances and extent of human rights violations, especially environmental violations, by corporations. The paper further elaborates on corporations and sustainable development. It finally offers certain recommendations, including a call for a United Nations declaration on corporate environmental human rights liability.
Keywords: environment, corporate, human rights, sustainable development
Procedia PDF Downloads 475

Edible Active Antimicrobial Coatings onto Plastic-Based Laminates and Its Performance Assessment on the Shelf Life of Vacuum Packaged Beef Steaks
Authors: Andrey A. Tyuftin, David Clarke, Malco C. Cruz-Romero, Declan Bolton, Seamus Fanning, Shashi K. Pankaj, Carmen Bueno-Ferrer, Patrick J. Cullen, Joe P. Kerry
Abstract:
Prolonging shelf life is essential in order to address issues such as supplier demands across continents, economic profit, customer satisfaction, and the reduction of food waste. Smart packaging solutions in the form of naturally derived antimicrobially active packaging may be a solution to these and other issues. A gelatin film-forming solution with added naturally sourced antimicrobials is a promising tool for active smart packaging. The objective of this study was to coat a conventional hydrophobic plastic packaging material with a hydrophilic antimicrobial active beef gelatin coating and conduct shelf-life trials on beef sub-primal cuts. The minimum inhibitory concentrations (MIC) of caprylic acid sodium salt (SO) and commercially available Auranta FV (AFV) (a bitter orange extract with a mixture of nutritive organic acids) were found to be 1% and 1.5%, respectively, against the bacterial strains Bacillus cereus, Pseudomonas fluorescens, Escherichia coli, and Staphylococcus aureus and against aerobic and anaerobic beef microflora. Therefore, SO or AFV was incorporated into the beef gelatin film-forming solution at twice the MIC, and the solution was coated onto a conventional plastic LDPE/PA film on the inner, cold-plasma-treated polyethylene surface. Beef samples were vacuum packed in this material, stored under chilled conditions, and sampled at weekly intervals during a 42-day shelf-life study. No significant differences (p < 0.05) in cook loss were observed among the different treatments compared to control samples until day 29; only for the AFV-coated beef samples was it 3% higher (37.3%) than the control (34.4%) on day 36. The antimicrobial films did not protect the beef against discoloration. SO-containing packages significantly (p < 0.05) reduced total viable bacterial counts (TVC) compared to the control and AFV samples until day 35.
No significant reduction in TVC was observed between the SO and AFV films on day 42, but a significant difference was observed compared to control samples, with a 1.40 log reduction in bacteria on day 42. AFV films significantly (p < 0.05) reduced TVC compared to control samples from day 14 until day 42. Control samples reached the set value of 7 log CFU/g on day 27 of testing, whereas AFV films did not reach this limit until day 35 and SO films until day 42. The antimicrobial AFV and SO coated films thus significantly prolonged the shelf life of beef steaks by 33% or 55% (7 and 14 days, respectively) compared to control film samples. It is concluded that antimicrobial coated films were successfully developed by coating the inner polyethylene layer of conventional LDPE/PA laminated films after plasma surface treatment. The results indicated that the use of antimicrobial active packaging coated with SO or AFV significantly (p < 0.05) increased the shelf life of the beef sub-primals. Overall, AFV- or SO-containing gelatin coatings have the potential to be used as effective antimicrobials in active packaging applications for muscle-based food products.
Keywords: active packaging, antimicrobials, edible coatings, food packaging, gelatin films, meat science
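The shelf-life figures above follow from finding the storage day at which TVC crosses the 7 log CFU/g limit. A sketch of that interpolation with invented count data (not the study's measurements):

```python
def day_limit_reached(days, log_tvc, limit=7.0):
    """Linearly interpolate the storage day at which TVC crosses `limit`
    (log CFU/g); returns None if the limit is never reached within the
    sampling window."""
    for (d0, v0), (d1, v1) in zip(zip(days, log_tvc),
                                  zip(days[1:], log_tvc[1:])):
        if v0 < limit <= v1:
            return d0 + (limit - v0) * (d1 - d0) / (v1 - v0)
    return None

days = [0, 7, 14, 21, 28, 35]
control = [3.0, 4.2, 5.3, 6.3, 7.2, 8.0]   # hypothetical control counts
so_film = [3.0, 3.6, 4.3, 5.0, 5.7, 6.4]   # hypothetical SO-coated film
t_control = day_limit_reached(days, control)
t_so = day_limit_reached(days, so_film)
```

The difference between the two crossing days gives the shelf-life extension attributable to the coating.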
Procedia PDF Downloads 303

Numerical Analysis of Charge Exchange in an Opposed-Piston Engine
Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski
Abstract:
The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed Diesel internal combustion engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations determined the swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports, at a CA value of 280°, for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure, and streamline contours were generated in the important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. First, to calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were calculated as the mean over the mesh elements. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine.
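The swirl computation described above (tangential velocity, then angular velocity averaged over the charge, normalized by the crankshaft speed) can be sketched as follows; the four-cell solid-body vortex is a contrived check case, not FLUENT output:

```python
import numpy as np

def swirl_ratio(x, y, u, v, vol, omega_engine):
    """Volume-weighted swirl of the charge about the cylinder axis.

    For each cell: tangential velocity v_t = (x*v - y*u) / r about the
    axis, then angular velocity omega = v_t / r; the swirl ratio is the
    volume-weighted mean omega divided by the crankshaft angular speed.
    A simplified sketch of the procedure described in the abstract.
    """
    r = np.hypot(x, y)
    v_t = (x * v - y * u) / r        # tangential velocity [m/s]
    omega = v_t / r                  # cell angular velocity [rad/s]
    return np.average(omega, weights=vol) / omega_engine

# Four cells of a solid-body vortex rotating at 400 rad/s (~3800 rpm):
x = np.array([0.02, -0.02, 0.0, 0.0])
y = np.array([0.0, 0.0, 0.02, -0.02])
u = -400.0 * y                       # solid-body field: u = -omega*y
v = 400.0 * x                        #                    v =  omega*x
vol = np.ones(4)
sr = swirl_ratio(x, y, u, v, vol, omega_engine=400.0)
```

For a solid-body vortex spinning at exactly the crankshaft speed, the swirl ratio comes out as 1, which makes the check case easy to verify.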
Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development.
Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine
Procedia PDF Downloads 197

Evaluation of Commercial Back-Analysis Package in Condition Assessment of Railways
Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman
Abstract:
Over the years, increased demands on railways, the emergence of high-speed trains and heavy axle loads, and the ageing and deterioration of the existing tracks have been imposing costly maintenance actions on the railway sector. The need to develop a fast and cost-efficient non-destructive assessment method for the structural evaluation of railway tracks is therefore critically important. The layer modulus is the main parameter used in the structural design and evaluation of the railway track substructure (foundation). Among many recently developed NDTs, the Falling Weight Deflectometer (FWD) test, widely used in pavement evaluation, has shown promising results for railway track substructure monitoring. The surface deflection data collected by the FWD are used to estimate the moduli of the substructure layers through the back-analysis technique. Although different commercially available back-analysis programs are used for pavement applications, only a limited number of research-based techniques have so far been developed for railway track evaluation. In this paper, the suitability, accuracy, and reliability of the BAKFAA software are investigated. The main rationale for selecting BAKFAA is that it has a relatively straightforward user interface, is freely available, and is widely used in highway and airport pavement evaluation. As part of the study, a finite element (FE) model of a railway track section near Leominster station, Herefordshire, UK, subjected to the FWD test was developed and validated against available field data. Then, a virtual experimental database (including 218 sets of FWD testing data) was generated using the FE model and employed as the measured database for the BAKFAA software. This database was generated considering various layer moduli for each layer of the track substructure over a predefined range.
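Back-analysis of an FWD deflection basin is, at its core, a search for the layer moduli whose computed deflections best match the measured ones. The sketch below uses a deliberately crude two-layer forward model as a stand-in for the layered-elastic engine inside BAKFAA; the numbers are illustrative only:

```python
def basin(moduli, loads=(1.0, 0.5, 0.25)):
    """Hypothetical forward model: surface deflections at three sensor
    offsets, inversely proportional to the two layer moduli. A stand-in
    for the layered elastic model used by real back-analysis software."""
    e1, e2 = moduli
    return [loads[0] / e1 + loads[1] / e2,
            loads[1] / e1 + loads[2] / e2,
            loads[2] / e2]

def back_analyse(measured, candidates):
    """Grid search minimising RMS error between measured and computed
    deflections -- the essence of deflection-basin back-analysis."""
    def rms(m):
        d = basin(m)
        return sum((a - b) ** 2 for a, b in zip(d, measured)) ** 0.5
    return min(candidates, key=rms)

true_moduli = (100.0, 40.0)
measured = basin(true_moduli)     # synthetic "field" basin
grid = [(e1, e2) for e1 in (50.0, 100.0, 150.0)
                 for e2 in (20.0, 40.0, 60.0)]
estimate = back_analyse(measured, grid)
```

Real packages replace the grid search with gradient-based or heuristic optimisation, but the objective (minimising basin misfit) is the same.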
The BAKFAA predictions were compared against cone penetration test (CPT) data (available from the literature; conducted near Leominster station on the same section where the FWD tests were performed). The results reveal that BAKFAA overestimates the moduli of each substructure layer. To adjust BAKFAA to the CPT data, this study introduces a correlation model to make BAKFAA applicable in railway applications.
Keywords: back-analysis, BAKFAA, railway track substructure, falling weight deflectometer (FWD), cone penetration test (CPT)
Procedia PDF Downloads 129

Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios.
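The resampling step attributed to the Guided Image Reconstruction Module can be illustrated with a strided pixel-unshuffle; the exact module design is not given in the abstract, so this is only one plausible reading:

```python
import numpy as np

def resample_to_lowres(img, s=2):
    """Split an HxW image into s*s low-resolution sub-images by strided
    sampling (pixel-unshuffle). Every pixel of the original is kept, but
    each sub-image has 1/(s*s) the resolution, so per-image compute drops
    accordingly. A sketch in the spirit of the module described above."""
    h, w = img.shape
    return [img[i::s, j::s] for i in range(s) for j in range(s)]

img = np.arange(16, dtype=float).reshape(4, 4)
subs = resample_to_lowres(img, s=2)
```

Because the decomposition is lossless, features extracted from the sub-images can later be recombined (the inverse pixel-shuffle) to recover full-resolution predictions.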
By incorporating the Guided Image Reconstruction Module, dual-branch structure, and Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
Procedia PDF Downloads 23

Solar Photovoltaic Foundation Design
Authors: Daniel John Avutia
Abstract:
Solar photovoltaic (PV) development relies on the sunlight hours available in a particular region to generate electricity. A potential area is assessed through its inherent solar radiation intensity, measured in watts per square meter. Solar energy development involves the feasibility, design, construction, operation, and maintenance of the relevant infrastructure, but this paper focuses on the design and construction aspects. Africa and Australasia have among the longest sunlight hours per day and the highest solar radiation per square meter, about 7 sunlight hours/day and 5 kWh/m²/day, respectively. Solar PV support configurations consist of fixed-tilt support and tracker system structures, the latter having been introduced to improve the power generation efficiency of the former through its sun-tracking capability. The installation of solar PV foundations involves rammed piles, drilled/grouted piles, and shallow reinforced concrete raft structures. This paper presents a case study of two solar PV projects, in Africa and Australia, discussing the foundation design considerations and associated construction cost implications of the selected foundation systems. Solar PV foundations represent up to one fifth of the civil works costs in a project. Therefore, the selection of the most structurally sound and feasible foundation for the prevailing ground conditions is critical to solar PV development. The design wind speed, measured by anemometers, governs the pile embedment depth for rammed and drilled/grouted foundation systems. The lateral pile deflection and vertical pull-out resistance of piles increase proportionally with the embedment depth for uniform pile geometry and geology. The pile driving rate may also be used to anticipate the lateral resistance and skin friction restraining the pile. Rammed pile foundations are the most structurally suitable due to their skin friction and ease of installation in various geological conditions.
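The relation between embedment depth and pull-out resistance noted above can be sketched with a first-order skin-friction check; the friction value and factor of safety below are assumptions, not project data:

```python
import math

def pullout_resistance_kn(d_m, depth_m, skin_friction_kpa, fos=2.0):
    """Allowable pull-out resistance of a driven pile from shaft skin
    friction: R_ult = f_s * (pi * D) * L, divided by a factor of safety.
    A first-order sizing check only; f_s and FoS are assumed values."""
    r_ult = skin_friction_kpa * math.pi * d_m * depth_m
    return r_ult / fos

# e.g. a 100 mm diameter post rammed 1.5 m into soil with f_s = 30 kPa
r_allow = pullout_resistance_kn(0.10, 1.5, 30.0)
```

Because the shaft area grows linearly with depth, the allowable pull-out resistance does too, which is why wind uplift on the panels sets the minimum embedment.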
The competitiveness of solar PV projects within the renewable energy mix is governed by lowering capital expenditure, improving power generation efficiency, and advances in power storage technology. Power generation reliability and efficiency are areas for further research within the renewable energy niche.
Keywords: design, foundations, piles, solar
Procedia PDF Downloads 191

Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant
Authors: John K. Avor, Choong-Koo Chang
Abstract:
The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters; thus, transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have failed due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems.
This research presents the combined application of an artificial neural network and fuzzy systems to accurately interpret key bus transfer parameters, such as the magnitude of the residual voltage, its decay time, and the associated phase angle, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The performance of the scheme is demonstrated on the APR1400 nuclear power plant auxiliary system.
Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability
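A fast-versus-residual transfer decision of the kind described can be caricatured as a threshold rule on the first-cycle residual-voltage magnitude and phase angle; the limits below are illustrative, not plant setpoints or the paper's Neuro-Fuzzy logic:

```python
def transfer_mode(residual_v_pu, phase_deg,
                  fast_limit_deg=20.0, residual_limit_pu=0.25):
    """Select a bus-transfer algorithm from first-cycle voltage data.

    Thresholds are assumed values for illustration: a fast transfer is
    permitted while the phase difference between the bus residual voltage
    and the incoming source stays small; otherwise the scheme waits for
    the residual voltage to decay (residual bus transfer)."""
    if abs(phase_deg) <= fast_limit_deg:
        return "fast"
    if residual_v_pu <= residual_limit_pu:
        return "residual"
    return "wait"

m1 = transfer_mode(0.95, 10.0)   # early in the decay: fast transfer
m2 = transfer_mode(0.15, 140.0)  # voltage has decayed: residual transfer
```

A Neuro-Fuzzy scheme replaces these crisp thresholds with learned membership functions, which is what lets it handle borderline cases the fixed rule misclassifies.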
Procedia PDF Downloads 171

Regional Barriers and Opportunities for Developing Innovation Networks in the New Media Industry: A Comparison between Beijing and Bangalore Regional Innovation Systems
Authors: Cristina Chaminade, Mandar Kulkarni, Balaji Parthasarathy, Monica Plechero
Abstract:
The characteristics of a regional innovation system (RIS) and the specificity of the knowledge base of an industry may combine to create peculiar paths for innovation and for the development of firms’ geographically extended innovation networks. However, the related empirical evidence in emerging economies remains underexplored. This paper aims to fill that research gap by means of qualitative research conducted in 2016 in Beijing (China) and Bangalore (India). It analyzes case studies of firms in the new media industry, a sector that merges different IT competences with competences from other knowledge domains and that is emerging in both RIS. The results show that while in Beijing the new media sector proves to be more in line with the existing institutional setting and governmental goals aimed at targeting specific social aspects and problems of the population, in Bangalore it remains a more spontaneous, firm-led process. In Beijing, what matters for the development of innovation networks is the governmental setting and the national and regional strategies to promote science and technology in this sector, the internet, and mass innovation. The peculiarities of recent governmental policies aligned with domestic goals may provide good possibilities for start-ups to develop innovation networks. However, because those policies target the Chinese market, networking outside the domestic market is not promoted to the same degree. Moreover, while some institutional peculiarities, such as a culture of collaboration in the region, may favour local networking, regulations related to internet censorship may limit the use of global networks, particularly those based on virtual spaces. Mainly firms that already have some foreign experience and contacts take advantage of global networks. In Bangalore, the government at present plays hardly any role, at any geographical level, in pushing networking for the new media industry.
Indeed, there is no particular strategic planning or prioritization in the region toward the new media industry, although one industrial organization has emerged to represent the animation industry's interests. This results in a lack of initiatives for integrating complementary knowledge into the local portfolio of IT specialization. Firms involved in the new media industry face institutional constraints related to a poor level of local trust and cooperation, which prevents full exploitation of local linkages. Moreover, knowledge-provider organizations in Bangalore remain a solid base for the IT domain, but not for other domains. Initiatives to link to international networks therefore seem to be more the result of individual entrepreneurial actions aimed at acquiring complementary knowledge and competencies from different domains and at exploiting potential in different markets. From these cases, it emerges that the roles of government, soft institutions, and organizations in the two RIS differ substantially in creating barriers and opportunities for the development of innovation networks and their specific aims.
Keywords: regional innovation system, emerging economies, innovation network, institutions, organizations, Bangalore, Beijing
Procedia PDF Downloads 323
459 The Feminism of Data Privacy and Protection in Africa
Authors: Olayinka Adeniyi, Melissa Omino
Abstract:
The field of data privacy and data protection in Africa is still evolving, with many African countries yet to enact legislation on the subject. As African governments bring their legislation up to speed in this field, they need to consider how patriarchy pervades every sector of African thought and manifests in society. Moreover, the laws enacted ought to be inclusive, especially towards women. This, in a nutshell, is the essence of data feminism. Data feminism is a new way of thinking about data science and data ethics that is informed by the ideas of intersectional feminism. Feminising data privacy and protection involves centering women in the issues of data privacy and protection, particularly in legislation, as is the case in this paper. This line of thought is not uncommon: international and regional human rights instruments specific to women came long after the general human rights instruments, although such provisions arguably should have been included in the original general instruments in the first instance. Since legislation on data privacy is being drafted in this century, with the rights and shortcomings of earlier instruments in view, the cue should be taken to ensure inclusive, holistic legislation for data privacy and protection from the outset. Data feminism is arguably an area that has been only scantily researched, albeit a needful one. With the spiraling increase in violence against women in the cyber world, compounded by COVID-19 and governments' necessary responses to it, and the effects of these on women and their rights, research on the feminism of data privacy and protection in Africa becomes inevitable.
This paper seeks to answer the following questions: what is data feminism in the African context, and why is it important for data privacy and protection legislation; what laws, if any, exist on data privacy and protection in Africa, and are they women-inclusive, and if not, why; and what measures are in place for the privacy and protection of women in Africa, and how can these be made possible? The paper aims to investigate the issue of data privacy and protection in Africa, the legal framework, and the protection or provision it has for women, if any. It further aims to research the importance and necessity of feminizing data privacy and protection, the effect of its absence, the challenges or bottlenecks in attaining this feat, and the possibilities of securing data privacy and protection for African women. The paper also examines emerging practices of data privacy and protection of women in other jurisdictions. It approaches the research through a review of papers and an analysis of laws and reports. It seeks to contribute to the existing literature in the field and is explorative in its suggestions. It proposes draft clauses to make any data privacy and protection legislation women-inclusive. It would be useful for policymaking, academia, and public enlightenment.
Keywords: feminism, women, law, data, Africa
Procedia PDF Downloads 205
458 Exploiting the Potential of Fabric Phase Sorptive Extraction for Forensic Food Safety: Analysis of Food Samples in Cases of Drug Facilitated Crimes
Authors: Bharti Jain, Rajeev Jain, Abuzar Kabir, Torki Zughaibi, Shweta Sharma
Abstract:
Drug-facilitated crimes (DFCs) entail the use of a single drug or a mixture of drugs to incapacitate a victim. Traditionally, biological samples gathered from victims have been analyzed to establish evidence of drug administration. Nevertheless, the rapid metabolism of various drugs and delays in analysis can impede the identification of such substances. To address this, the present article describes a rapid, sustainable, highly efficient, and miniaturized protocol for the identification and quantification of three sedative-hypnotic drugs, namely diazepam (DZ), chlordiazepoxide (CDP), and ketamine (KET), in alcoholic beverages and complex food samples (cream biscuits, flavored milk, juice, cake, tea, sweets, and chocolate). The methodology uses fabric phase sorptive extraction (FPSE) to extract the three drugs; the extracted samples are then analyzed by gas chromatography-mass spectrometry (GC-MS). Several parameters, including the type of membrane, pH, agitation time and speed, ionic strength, sample volume, elution volume and time, and type of elution solvent, were screened and thoroughly optimized. Among all evaluated membranes, sol-gel Carbowax 20M (CW-20M) demonstrated the most effective extraction of the target analytes. Under optimal conditions, the method displayed linearity within the range of 0.3–10 µg mL⁻¹ (or µg g⁻¹), with a coefficient of determination (R²) ranging from 0.996–0.999. The limits of detection (LODs) and limits of quantification (LOQs) for liquid samples were 0.020–0.069 µg mL⁻¹ and 0.066–0.22 µg mL⁻¹, respectively. Correspondingly, the LODs for solid samples ranged from 0.056–0.090 µg g⁻¹, while the LOQs ranged from 0.18–0.29 µg g⁻¹. Notably, the method showed good precision, with repeatability below 5% and reproducibility below 10%.
Furthermore, the FPSE-GC-MS method proved effective in determining diazepam in forensic food samples connected to drug-facilitated crimes. Additionally, the proposed method's whiteness was evaluated using the RGB12 algorithm.
Keywords: drug facilitated crime, fabric phase sorptive extraction, food forensics, white analytical chemistry
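The reported LOD/LOQ figures come from a calibration line and can be computed in the usual ICH manner (LOD = 3.3·s/slope, LOQ = 10·s/slope, where s is the residual standard deviation of the regression). The sketch below uses hypothetical calibration points within the stated 0.3–10 µg mL⁻¹ linear range; the study's actual calibration data are not given in the abstract.

```python
import numpy as np

# Hypothetical calibration data (concentration in ug/mL vs. detector
# response); the paper's real calibration points are not reported here.
conc = np.array([0.3, 1.0, 2.5, 5.0, 10.0])
resp = np.array([310.0, 1005.0, 2490.0, 5020.0, 9985.0])

# Least-squares fit: response = slope * conc + intercept
slope, intercept = np.polyfit(conc, resp, 1)

# Residual standard deviation of the regression (n - 2 degrees of freedom)
pred = slope * conc + intercept
s_res = np.sqrt(np.sum((resp - pred) ** 2) / (len(conc) - 2))

# ICH-style detection and quantification limits
lod = 3.3 * s_res / slope
loq = 10.0 * s_res / slope
print(f"slope={slope:.1f}, LOD={lod:.3f} ug/mL, LOQ={loq:.3f} ug/mL")
```

By construction, the LOQ is always 10/3.3 ≈ 3 times the LOD, which matches the ratios reported in the abstract (e.g. 0.020 vs. 0.066 µg mL⁻¹).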
Procedia PDF Downloads 70
457 Cost Analysis of Neglected Tropical Disease in Nigeria: Implication for Programme Control and Elimination
Authors: Lawong Damian Bernsah
Abstract:
Neglected tropical diseases (NTDs) are most prevalent among poor and rural populations and are endemic in 149 countries. These diseases infect 1.4 billion people worldwide. The 17 neglected tropical diseases recognized by the WHO constitute the fourth largest health and economic burden of all communicable diseases. Five of these 17 diseases are considered in the cost analysis of this paper: lymphatic filariasis, onchocerciasis, trachoma, schistosomiasis, and soil-transmitted helminth infections. The WHO has proposed a roadmap for eradication and elimination by 2020, and treatments have been donated by pharmaceutical manufacturers through the London Declaration. The paper estimates the cost of the NTD control and elimination programme in Nigeria for each NTD and in total. This is necessary because it forms the basis upon which programme budgets and expenditure can be planned. Moreover, given the opportunity cost of the resources devoted to NTDs, estimating the cost provides a basis for comparison. The cost of the NTD control and elimination programme is estimated using the population at risk for each NTD and for the total. The population at risk is obtained from the national master plan for 2015–2020, while the cost per person was taken from similar studies conducted in similar settings and ranges from US$0.1 to US$0.5 for mass administration of medicine (MAM) and between US$1 and US$1.5 for each NTD. The combined cost for all the NTDs was estimated at US$634.88 million for the period 2015–2020, and US$1.9 billion for each NTD for the same period. For sensitivity analysis and for robustness, the cost per person was varied, and all estimates remained high.
Given that health expenditure in Nigeria (% of GDP) averaged 3.5% for the period 1995–2014, it is clear that efforts must be made to improve allocation to the health sector in general, which it is hoped would trickle down to NTD control and elimination. Thus, the government and donor partners need to step up budgetary allocations and also be aware of the costs of the NTD control and elimination programme, since these resources have alternative uses.
Keywords: neglected tropical disease, cost analysis, neglected tropical disease programme control and elimination, cost per person
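The estimation described above is simple arithmetic: population at risk multiplied by cost per person, varied over a range for sensitivity analysis. A minimal sketch, with hypothetical population-at-risk figures (only the US$0.1–0.5 MAM unit-cost range comes from the abstract; the master plan's actual populations are not listed there):

```python
# Hypothetical population-at-risk figures per disease; the abstract cites
# the national master plan 2015-2020 but does not list the numbers.
pop_at_risk = {
    "lymphatic_filariasis": 120_000_000,
    "onchocerciasis": 50_000_000,
    "trachoma": 30_000_000,
    "schistosomiasis": 60_000_000,
    "soil_transmitted_helminths": 100_000_000,
}

def programme_cost(populations, cost_per_person):
    """Per-disease cost = population at risk x unit cost (US$)."""
    return {d: p * cost_per_person for d, p in populations.items()}

low = programme_cost(pop_at_risk, 0.1)   # lower-bound sensitivity case
high = programme_cost(pop_at_risk, 0.5)  # upper-bound sensitivity case
print(f"Total (low):  US${sum(low.values()):,.0f}")
print(f"Total (high): US${sum(high.values()):,.0f}")
```

Varying the unit cost over the stated range scales the total linearly, which is why the paper's conclusion is robust to that sensitivity analysis.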
Procedia PDF Downloads 273
456 Investigating the Effects of Cylinder Disablement on Diesel Engine Fuel Economy and Exhaust Temperature Management
Authors: Hasan Ustun Basaran
Abstract:
Diesel engines are widely used in the transportation sector due to their high thermal efficiency. However, they also release high rates of NOₓ and particulate matter (PM) emissions, which have hazardous effects on human health. Therefore, environmental protection agencies have issued strict emission regulations for automotive diesel engines, and these regulations are increasingly being strengthened. Engine producers are investigating novel on-engine methods such as advanced combustion techniques, utilization of renewable fuels, exhaust gas recirculation, and advanced fuel injection, or they use exhaust after-treatment (EAT) systems, in order to reduce emission rates in diesel engines. Although these on-engine methods are effective at curbing emission rates, they result in inefficiency or cannot decrease emission rates satisfactorily at all operating conditions. Therefore, engine manufacturers apply both on-engine techniques and EAT systems to meet the stringent emission norms. EAT systems are highly effective at diminishing emission rates; however, they perform inefficiently at low loads due to low exhaust gas temperatures (below 250°C). The objective of this study is therefore to demonstrate that engine-out temperatures can be elevated above 250°C in low-load cases via cylinder disablement. The engine studied and modeled via Lotus Engine Simulation (LES) software is a six-cylinder, turbocharged, and intercooled diesel engine. Exhaust temperatures and mass flow rates are predicted at 1200 rpm engine speed and several low-load conditions using the LES program. It is seen that cylinder deactivation results in a considerable exhaust temperature rise (up to 100°C) at low loads, which ensures effective EAT management. The method also improves fuel efficiency through reduced total pumping loss; the decreased total air induction due to the inactive cylinders is thought to be responsible for the improved pumping loss.
The technique reduces the exhaust gas flow rate, as air flow is cut off in the disabled cylinders. Still, heat transfer rates to the after-treatment catalyst bed do not decrease much, since exhaust temperatures are increased sufficiently. The simulation results are promising; however, further experimental studies are needed to identify the true potential of the method for fuel consumption and EAT improvement.
Keywords: cylinder disablement, diesel engines, exhaust after-treatment, exhaust temperature, fuel efficiency
Procedia PDF Downloads 176
455 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs
Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino
Abstract:
Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments. It is frequently isolated from raw seafood, particularly shellfish. Consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. have excellent resistance to low temperatures, so they can persist in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 species of bacteria have been demonstrated to be capable of entering this state. VBNC cells cannot grow on conventional culture media but are viable and maintain metabolic activity, and may therefore constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also enter the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods to detect V. parahaemolyticus VBNC cells and to investigate their presence in frozen bivalve molluscs on the regular market. Materials and Methods: Propidium monoazide (PMA) treatment was combined with real-time polymerase chain reaction (qPCR) targeting the tlh gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR was highly specific to V. parahaemolyticus, with a limit of detection (LOD) of 10⁻¹ log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were subsequently subjected to qualitative (in alkaline phosphate-buffered solution) and quantitative analysis for V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (DIFCO) with 2.5% NaCl, with incubation at 30°C for 24–48 hours.
Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, with PMA-treated samples exposed for 45 min to halogen light (650 W). Total DNA was extracted from the cell suspension of the homogenate samples using a boiling protocol. The real-time PCR was conducted with species-specific primers for V. parahaemolyticus, in a final volume of 20 µL containing 10 µL of SYBR Green mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H₂O. The qPCR was carried out on a CFX96 Touch (Bio-Rad, USA). Results: All samples were negative for V. parahaemolyticus by both the quantitative and qualitative classical culturing techniques. PMA-qPCR allowed us to identify VBNC V. parahaemolyticus in 20.78% of the samples, at values between Log 10⁻¹ and Log 10⁻³ CFU/g. Only clam samples were positive by PMA-qPCR. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method used is applicable to the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR to identify VBNC forms, which are undetectable by classic microbiological methods. Precise knowledge of V. parahaemolyticus in the VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
Keywords: food safety, frozen bivalve molluscs, PMA dye, real-time PCR, VBNC state, Vibrio parahaemolyticus
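The quantification step relies on a linear standard curve relating qPCR Ct values to log CFU/mL over the stated 1.0–8.0 log range. A minimal sketch of how unknowns are read back off such a curve; the Ct values here are hypothetical, since the abstract reports only the linear range and the correlation coefficient:

```python
import numpy as np

# Hypothetical Ct values for the standard dilutions; the abstract reports a
# linear standard curve over 1.0-8.0 log CFU/mL with R^2 = 0.9999 but does
# not give the raw Ct data or the curve coefficients.
log_cfu = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
ct = np.array([35.1, 31.7, 28.4, 25.0, 21.7, 18.3, 15.0, 11.6])

# Fit Ct = slope * log10(CFU/mL) + intercept, then invert for unknowns
slope, intercept = np.polyfit(log_cfu, ct, 1)
r = np.corrcoef(log_cfu, ct)[0, 1]

def quantify(ct_unknown):
    """Convert a measured Ct back to log CFU/mL via the standard curve."""
    return (ct_unknown - intercept) / slope

print(f"slope={slope:.2f}, R^2={r ** 2:.4f}")
print(f"Ct=20.0 -> {quantify(20.0):.2f} log CFU/mL")
```

A slope near -3.32 corresponds to ~100% amplification efficiency, which is the usual sanity check on such a curve.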
Procedia PDF Downloads 139
454 Investigating the Sloshing Characteristics of a Liquid by Using an Image Processing Method
Authors: Ufuk Tosun, Reza Aghazadeh, Mehmet Bülent Özer
Abstract:
This study puts forward a method to analyze the sloshing characteristics of the liquid in a tuned sloshing absorber system by using image processing tools. Tuned sloshing vibration absorbers have recently attracted researchers' attention as seismic load dampers in construction due to their practical and logistical convenience. The absorber is a liquid that sloshes and applies a force in opposite phase to the motion of the structure. Experimental characterization of the sloshing behavior can be used to verify the results of numerical analyses and to assess the accuracy of assumptions about the motion of the liquid. There are extensive theoretical and experimental studies in the literature on the dynamical and structural behavior of tuned sloshing dampers. Most of these works attempt to estimate the sloshing behavior of the liquid, such as the free-surface motion and the total force applied by the liquid to the container wall. For these purposes, sensors such as load cells and ultrasonic sensors are prevalent in experimental work. Load cells are only capable of measuring force, and obtaining the pure sloshing force requires conducting tests both with and without liquid. Ultrasonic level sensors give point-wise measurements and hence cannot capture the whole free-surface motion; furthermore, they may give incorrect data when the liquid splashes. In this work, a method for evaluating the sloshing wave height using camera recordings and image processing techniques is presented. The motion of the liquid and its container, made of a transparent material, is recorded by a high-speed camera aligned with the free surface of the liquid. The video captured by the camera is processed frame by frame using the MATLAB Image Processing Toolbox. The process starts with cropping the desired region.
By recognizing the regions containing liquid and eliminating noise and liquid splashing, a final picture depicting the free surface of the liquid is obtained. This picture is then used to obtain the height of the liquid along the length of the container. The process is verified against ultrasonic sensors that measure the fluid height at the free surface.
Keywords: fluid structure interaction, image processing, sloshing, tuned liquid damper
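The surface-extraction step described above (threshold the frame, keep the liquid pixels, read off the topmost liquid pixel in each column) can be sketched as follows. The paper uses MATLAB's Image Processing Toolbox; this Python/NumPy version is illustrative only, and the threshold and pixel-to-millimetre scale are assumed values:

```python
import numpy as np

def surface_profile(frame, threshold=100, px_per_mm=4.0):
    """Return the free-surface height (mm above the container bottom) per
    image column, assuming liquid pixels are darker than the background
    and row 0 is the top of the frame."""
    liquid = frame < threshold            # boolean mask of liquid pixels
    rows, cols = frame.shape
    heights = np.zeros(cols)
    for c in range(cols):
        idx = np.flatnonzero(liquid[:, c])
        if idx.size:                      # topmost liquid pixel in this column
            heights[c] = (rows - idx[0]) / px_per_mm
    return heights

# Synthetic 8x4 frame: liquid (value 0) fills the bottom 3 rows everywhere,
# plus one extra row in the last two columns (a tilted free surface).
frame = np.full((8, 4), 255)
frame[5:, :] = 0
frame[4, 2:] = 0
print(surface_profile(frame, px_per_mm=1.0))  # -> [3. 3. 4. 4.]
```

Applying this to every frame of the video yields the wave-height time history that the paper compares against ultrasonic point measurements.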
Procedia PDF Downloads 344
453 Room Temperature Sensitive Broadband Terahertz Photo Response Using Platinum Telluride Based Devices
Authors: Alka Jakhar, Harmanpreet Kaur Sandhu, Samaresh Das
Abstract:
Terahertz (THz) technology-based devices are advancing at a remarkable rate on account of their wide range of applications in imaging, security, communication, and spectroscopy. The available room-temperature THz detectors, including Golay cells, pyroelectric detectors, field-effect transistors, and photoconductive antennas, have limitations such as narrow-band response, slow response speed, transit-time limits, and complex fabrication processes. There is an urgent demand to explore new materials and device structures to accomplish efficient THz detection systems. Recently, transition metal dichalcogenides (TMDs), including topological semimetals and topological insulators such as PtSe₂, MoTe₂, WSe₂, and PtTe₂, have opened novel possibilities for photonic and optical devices. The peculiar properties of these materials, such as the Dirac cone and the associated fermions, nonlinear optical response, high conductivity, and ambient stability, make them promising for the development of THz devices. Here, platinum telluride (PtTe₂) based devices are demonstrated for THz detection in the frequency range of 0.1–1 THz. The PtTe₂ is synthesized by direct tellurization of a sputtered platinum film on a high-resistivity silicon substrate using the chemical vapor deposition (CVD) method. Raman, XRD, and XPS spectra confirm the formation of the thin PtTe₂ film. The PtTe₂ channel length is 5 µm, and the channel is connected to a bow-tie antenna for strong THz electric-field confinement. The devices were characterized over a wide frequency range from 0.1–1 THz. The induced THz photocurrent is measured using a lock-in amplifier after a preamplifier. A maximum responsivity of up to 1 A/W is achieved in self-biased mode, and this responsivity increases further under applied bias voltage. This photoresponse to low-energy THz photons is mainly due to the photogalvanic effect in PtTe₂.
The DC current induced along the PtTe₂ channel is directly proportional to the amplitude of the incident THz electric field. Thus, these new topological semimetal materials provide new pathways for sensitive detection and sensing applications in the THz domain.
Keywords: terahertz, detector, responsivity, topological semimetals
Procedia PDF Downloads 161
452 Monitoring of Formaldehyde over Punjab Pakistan Using Car Max-Doas and Satellite Observation
Authors: Waqas Ahmed Khan, Faheem Khokhaar
Abstract:
Air pollution is one of the main drivers of climate change. Greenhouse gases cause the melting of glaciers and changes in temperature and rainfall patterns. Formaldehyde (HCHO) is not a direct precursor damaging ozone, as CO₂ or methane are, but it is linked to glyoxal (CHOCHO), which does affect ozone. Countries around the globe have their own air quality monitoring protocols to describe local air pollution. Formaldehyde is a colorless, flammable, strong-smelling chemical that is used in building materials and in many household products and medical preservatives. Formaldehyde also occurs naturally in the environment; it is produced in small amounts by most living organisms as part of normal metabolic processes. Pakistan lacks large-scale monitoring facilities to measure atmospheric gases on a regular basis. Formaldehyde is linked to glyoxal and affects mountain biodiversity and livelihoods, so its monitoring is necessary in order to maintain and preserve biodiversity. Objective: The present study aims to measure atmospheric HCHO vertical column densities (VCDs) obtained from ground-based instruments and to compute HCHO data for Punjab and the elevated areas (Rawalpindi and Islamabad) from satellite observations during 2014–2015. Methodology: To explore the spatial distribution of HCHO, various field campaigns, involving international scientists, were conducted using car MAX-DOAS. The major focus was on cities along national highways and the industrial region of Punjab, Pakistan. Level-2 data products of the satellite instrument OMI, retrieved by the differential optical absorption spectroscopy (DOAS) technique, are used. The spatio-temporal distribution of HCHO column densities over the main cities and regions of Pakistan is discussed. Results: The results show high HCHO column densities, exceeding the permissible limit, over the main cities of Pakistan, particularly in areas with rapid urbanization and enhanced economic growth.
The VCD values over elevated areas of Pakistan such as Islamabad and Rawalpindi are around 1.0×10¹⁶ to 34.01×10¹⁶ molecules/cm², while Punjab has values around 34.01×10¹⁶ molecules/cm². Similarly, areas with major industrial activity showed high HCHO concentrations. Tropospheric glyoxal VCDs were found to be 4.75×10¹⁵ molecules/cm². Conclusion: The results show that the monitoring site surrounded by the Margalla Hills (Islamabad) has higher concentrations of formaldehyde. Wind data show that industrial areas and areas of high economic growth have high values, as they provide pathways for the transport of HCHO. The results obtained from this study would help the EPA, the WHO, and air protection departments to monitor air quality and to further the preservation and restoration of mountain biodiversity.
Keywords: air quality, formaldehyde, Max-Doas, vertical column densities (VCDs), satellite instrument, climate change
Procedia PDF Downloads 212
451 Modernization of Translation Studies Curriculum at Higher Education Level in Armenia
Authors: A. Vahanyan
Abstract:
The paper touches upon the problem of revising and modernizing the current curriculum on translation studies at Armenian higher education institutions (HEIs). In a contemporary world where the quality and speed of services are highly valued, certain higher education centers in Armenia do not demonstrate enough flexibility in revising and amending the courses they teach. This issue affects various curricula at the university level, and the translation studies curriculum in particular. Technological innovations of great help to translators were long ago smoothly integrated into the global translation industry. According to the European Master's in Translation (EMT) framework, translation service provision comprises linguistic, intercultural, information mining, thematic, and technological competences. Therefore, to form the competences mentioned above, the curriculum should be seriously restructured to meet modern education and job market requirements, and relevant courses should be proposed. New courses, in particular, should focus on the formation of technological competences. These suggestions have been made on the basis of the author's research of the problem across various HEIs in Armenia. The updated curricula should include courses aimed at familiarization with various computer-assisted translation (CAT) tools (MemoQ, Trados, OmegaT, Wordfast, etc.) in the translation process and at the creation of glossaries and termbases compatible with different platforms, which will ensure consistency in the translation of similar texts and speed up the translation process itself. Another aspect that may be strengthened via curriculum modification is the introduction of interdisciplinary and project-based learning courses, which will develop the information mining and thematic competences that are also of great importance.
Of course, amending the existing curriculum with the mentioned courses will require corresponding faculty development via training, workshops, and seminars. Finally, the provision of extensive internships with translation agencies is strongly recommended, as this will ensure the synthesis of theoretical background and practical skills highly required for the field. Summing up, the restructuring and modernization of existing curricula on translation studies should focus on three major aspects: the introduction of new courses that meet global quality standards of education, professional development for faculty, and the integration of extensive internships supervised by experts in the field.
Keywords: competencies, curriculum, modernization, technical literacy, translation studies
Procedia PDF Downloads 131
450 Scientific and Regulatory Challenges of Advanced Therapy Medicinal Products
Authors: Alaa Abdellatif, Gabrièle Breda
Abstract:
Background. Advanced therapy medicinal products (ATMPs) are innovative therapies that mainly target orphan diseases and high unmet medical needs. ATMPs include gene therapy medicinal products (GTMPs), somatic cell therapy medicinal products (CTMPs), and tissue-engineered products (TEPs). Since legislation opened the way in 2007, 25 ATMPs have been approved in the EU, roughly the same number as have been approved by the U.S. Food and Drug Administration. However, not all of the approved ATMPs have successfully reached the market and retained their approval. Objectives. We aim to understand, in a systemic approach, all the factors limiting market access for these very promising therapies, so that the problems can be overcome in the future with scientific, regulatory, and commercial innovations. Going beyond recent reviews that focus on specific countries, products, or dimensions, we address all the challenges faced by ATMP development today. Methodology. We used mixed methods and a multi-level approach for data collection. First, we performed an updated academic literature review on ATMP development and the associated scientific and market access challenges (papers published between 2018 and April 2023). Second, we analyzed industry feedback from cell and gene therapy webinars and white papers published by providers and pharmaceutical companies. Finally, we conducted a comparative analysis of the regulatory guidelines published by the EMA and the FDA for ATMP approval. Results: The main challenge in bringing these therapies to market is the high development cost; developing ATMPs is expensive due to the need for specialized manufacturing processes. Furthermore, the regulatory pathways for ATMPs are often complex and can vary between countries, making it challenging to obtain approval and ensure compliance with different regulations.
As a result of the high costs associated with ATMPs, challenges in obtaining reimbursement from healthcare payers lead to limited patient access to these treatments. ATMPs are often developed for orphan diseases, which means that the patient population available for clinical trials is limited, making it challenging to demonstrate safety and efficacy. In addition, the complex manufacturing processes required for ATMPs can make it challenging to scale up production to meet demand, which can limit availability and increase costs. Finally, ATMPs face safety and efficacy challenges: serious adverse events such as toxicity related to the use of viral vectors or cell therapies, as well as issues with starting materials and donor-related aspects. Conclusion. From our mixed-methods analysis, we found that ATMPs face a number of challenges in their development, regulatory approval, and commercialization, and that addressing these challenges requires collaboration between industry, regulators, healthcare providers, and patient groups. This first analysis will help us to address each challenge with proper and innovative solutions in order to increase the number of ATMPs approved and reaching patients.
Keywords: advanced therapy medicinal products (ATMPs), product development, market access, innovation
Procedia PDF Downloads 76
449 Effects of Lower and Upper Body Plyometric Training on Electrocardiogram Parameters of University Athletes
Authors: T. N. Uzor, C. O. Akosile, G. O. Emeahara
Abstract:
Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and it is used in sports conditioning by coaches and athletes. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower and upper body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteers were apparently healthy, physically active, free of any lower and upper extremity bone injuries for the past year, and without medical or orthopedic conditions that might affect their participation. Ten subjects were purposively assigned to each of three groups: lower body plyometric training (LBPT), upper body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises at moderate intensity: lower body (ankle hops, squat jumps, tuck jumps) and upper body (push-ups, medicine ball chest throws, and side throws). The data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while paired-samples t-tests were used to test the hypotheses. The results revealed that athletes trained using LBPT had greater reductions in ECG parameters than those in the control group.
The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences from the control group in the ECG parameters following ten weeks of plyometric training, except in the Q wave, R wave and S wave (QRS) complex. Based on the findings of the study, it was recommended, among others, that coaches include both LBPT and UBPT as part of athletes' overall training programmes from primary to tertiary institutions, to optimise performance as well as reduce the risk of cardiovascular disease and promote a healthy lifestyle.
Keywords: concentric, eccentric, electrocardiogram, plyometric
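The paired-samples t-test used in the analysis above can be sketched in a few lines of Python. The heart-rate values below are hypothetical illustrations, not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for pre/post measurements."""
    assert len(pre) == len(post) and len(pre) > 1
    diffs = [a - b for a, b in zip(pre, post)]  # per-subject differences
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # sample sd of differences
    return t, n - 1

# Hypothetical resting heart rates (bpm) before and after a training block.
pre = [72, 75, 70, 78, 74]
post = [68, 71, 69, 74, 70]
t, df = paired_t(pre, post)  # t ≈ 5.67 with 4 degrees of freedom
```

In practice a statistics package (such as the SPSS used in the study, or `scipy.stats.ttest_rel`) would also report the p-value against the t distribution with `df` degrees of freedom.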
448 Parents, Carers and Young Persons’ Views Regarding Nursing ‘Workarounds’ Within Clinical Electronic Patient Record Systems
Authors: Patrick Nurse, Professor Neil Sebire, Polly Livermore
Abstract:
The use of digital systems in healthcare is now highly prevalent, and with further advancement of technology these systems will become increasingly utilised within the healthcare sector. Understanding how clinicians (for example, doctors and nurses) interact with technology and digital systems is therefore critical to making care safer. Seven members of the Parent/Carers’ Research Advisory Group and the Young Persons’ Research Group at a healthcare Trust in London, together with three staff members, contributed to an engagement workshop to assess the impact of digital systems on nursing practice. The group also advised on the viability of a research study to investigate this further. A wide range of issues with digital system implementation in healthcare was raised; ‘workarounds’, systems training, and the upkeep and regulation of usage all emerged as early themes in the discussion. Further discussion focused on the escalation of issues, ‘workarounds’, and problem solving. While challenging to implement, digital systems are hugely beneficial to healthcare providers. The workshop indicated that there is scope to investigate the prevalence, nature, and escalation of ‘workarounds’; this was of key interest to the advisory group. A notable concern of the group, from a patient and parental perspective, was how nurses might feel when needing to complete a ‘workaround’ during a busy shift, especially when the reasons for the ‘workaround’ were outside the nurse’s control and driven by clinical need and urgency of care. This showed the level of insight that those using healthcare services have into the reality of the workflows of those providing care. It also reflects the desire of patients and families to understand more about the administration and methodology of their care.
Future study should be dedicated to understanding why nurses deploy ‘workarounds’, as well as their perspective and experience of them and the subsequent escalation through leadership hierarchies.
Keywords: patient engagement/involvement, workarounds, medication-administration, digital systems
447 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely reformulated in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, because of the large number of unknowns to be optimized with respect to a training set that generally must be large enough for the model to generalize effectively. It is also necessary to limit the size of the convolution kernels for computational reasons, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout a deep CNN architecture. Yet it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows a large number of random filters to be used at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filters, even though additional computation is required for the convolutions in the feed-forward procedure. The use of random kernels of varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments that quantitatively compare well-known CNN architectures with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework.
Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
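The core idea — fixed random kernels of several sizes, with only one scalar weight per kernel trainable — can be illustrated with a minimal NumPy sketch. This is a toy single-layer forward pass under our own naming, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(x, k):
    """'Same'-padded 2D correlation of image x with an odd-sized kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

class RandomKernelLayer:
    """Fixed random kernels of varying sizes; only the per-kernel scalar
    weights are trainable, so unknowns grow with the number of kernels,
    not with their sizes."""
    def __init__(self, sizes=(3, 5, 7)):
        self.kernels = [rng.standard_normal((s, s)) for s in sizes]  # frozen
        self.weights = np.ones(len(sizes))  # the only trainable unknowns

    def forward(self, x):
        return sum(w * conv2d_same(x, k)
                   for w, k in zip(self.weights, self.kernels))
```

The output is linear in the scalar weights, which is what keeps back-propagation cheap regardless of kernel size; the mixed kernel sizes provide the multi-scale responses discussed above.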
446 Structural Property and Mechanical Behavior of Polypropylene–Elemental Sulfur (S8) Composites: Effect of Sulfur Loading
Authors: S. Vijay Kumar, Kishore K. Jena, Saeed M. Alhassan
Abstract:
Elemental sulfur is currently produced at the level of 70 million tons annually by petroleum refining, the majority of which is used in the production of sulfuric acid, fertilizer, and other chemicals. Still, over 6 million tons of elemental sulfur is generated in excess, which creates exciting opportunities to develop new chemistry that utilizes sulfur as a feedstock for polymers. The development of new polymer composite materials using sulfur is not widely explored and remains an important challenge in the field. Polymer nanocomposites prepared with carbon nanotubes, graphene, silica, and other nanomaterials are well established; however, the utilization of sulfur as a filler in a polymer matrix could be an interesting study. This work presents the possibility of utilizing elemental sulfur as a reinforcing filler in a polymer matrix. In this study, we prepared polypropylene/sulfur nanocomposites and examined their physical, mechanical, and morphological properties as a function of sulfur loading. In the sample preparation, four levels of elemental sulfur loading (5, 10, 20 and 30 wt.%) were designed. Composites were prepared by melt mixing using a laboratory-scale mini twin-screw extruder at 180°C for 15 min; the reaction time and temperature were kept constant for all prepared composites. The structure and crystallization behavior of the composites were investigated by Raman, FTIR, XRD and DSC analysis. It was observed that sulfur interferes with the crystalline arrangement of polypropylene and depresses crystallization, which affects the melting point, mechanical properties and thermal stability. In the tensile tests, one test temperature (room temperature) and one crosshead speed (10 mm/min) were used. The tensile strength and tensile modulus of the composites decreased slightly with increasing filler loading; however, the percentage elongation improved by more than 350% compared to neat polypropylene.
The effect of sulfur on the morphology of polypropylene was studied with TEM and SEM techniques. Microscopy analysis reveals that sulfur is homogeneously dispersed in the polymer matrix and behaves as a single-phase arrangement in the polymer. The maximum elongation for polypropylene can be achieved by adjusting the sulfur loading in the polymer. This study reveals the possibility of using elemental sulfur as a solid plasticizer in a polypropylene matrix.
Keywords: crystallization, elemental sulfur, morphology, thermo-mechanical properties, polypropylene, polymer nanocomposites
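The tensile quantities discussed above reduce to two standard formulas: strength is maximum load over cross-sectional area, and percent elongation is the gauge-length change at break over the original gauge length. A short sketch with hypothetical specimen values (not the paper's measured data):

```python
def tensile_metrics(max_load_n, area_mm2, gauge_len_mm, len_at_break_mm):
    """Tensile strength (MPa; N/mm^2 == MPa) and percent elongation at break."""
    strength_mpa = max_load_n / area_mm2
    elongation_pct = 100.0 * (len_at_break_mm - gauge_len_mm) / gauge_len_mm
    return strength_mpa, elongation_pct

# Hypothetical specimens, 50 mm gauge length, 10 mm^2 cross-section:
neat_pp = tensile_metrics(400.0, 10.0, 50.0, 100.0)    # (40.0 MPa, 100.0 %)
composite = tensile_metrics(370.0, 10.0, 50.0, 275.0)  # (37.0 MPa, 450.0 %)
```

With numbers like these, the composite's strength is slightly lower while its elongation exceeds the neat polymer's by 350 percentage points, mirroring the trend reported in the abstract.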
445 Engineering Analysis for Fire Safety Using Computational Fluid Dynamic (CFD)
Authors: Munirajulu M, Srikanth Modem
Abstract:
A large cricket stadium with the capacity to accommodate several thousand spectators has a seating arena consisting of a two-tier arrangement, with an upper and a lower bowl and an intermediate concourse podium level for pedestrian movement to access the bowls. The uniqueness of the stadium is that spectators have an unobstructed view of the field of play from all around the podium. The upper and lower bowls are connected by stairs. The stair landing is a precast slab supported by cantilevered steel beams, which are fixed to the precast columns supporting the stadium structure; the stair slabs are precast concrete supported on the landing slab and the cantilevered steel beams. In the event of a fire at podium level between two staircases, the fire resistance of the steel beams is critical to life safety: if a steel beam loses its strength due to a lack of fire resistance, it will be unable to support the stair slabs, creating a hazard while evacuating occupants from the upper bowl to the lower bowl. In this study, to ascertain the fire rating and life safety, a performance-based design using CFD analysis is used to evaluate the fire resistance of the steel beams. A fire size of 3.5 MW (convective heat output of the fire) with a wind speed of 2.57 m/s is considered for the fire and smoke simulation. The CFD results show that the smoke temperature near and around the staircase does not exceed 150 °C for the fire duration considered, and the surface temperature of the cantilevered steel beams is found to be at most 150 °C. Since this temperature is much less than the critical failure temperature of steel (520 °C), it is concluded that the design of the structural steel supports on the staircase is adequate and does not need additional fire protection such as a fire-resistant coating. The CFD analysis provided an engineering basis for the performance-based design of steel structural elements and an opportunity to optimize fire protection requirements.
Thus, performance-based design using CFD modeling and simulation of fire and smoke is an innovative way to evaluate fire rating requirements, ascertain life safety, and optimize the design with regard to fire protection of structural steel elements.
Keywords: fire resistance, life safety, performance-based design, CFD analysis
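The acceptance check at the heart of this performance-based argument — compare the peak simulated beam surface temperature against the critical failure temperature of steel, taken here as 520 °C as in the abstract — can be written as a one-line utilization ratio. Function names and the margin parameter are our own illustration:

```python
def steel_utilization(peak_temp_c, critical_temp_c=520.0):
    """Ratio of peak simulated surface temperature to the critical
    failure temperature of steel; values well below 1.0 indicate
    the unprotected member is adequate."""
    return peak_temp_c / critical_temp_c

def needs_coating(peak_temp_c, critical_temp_c=520.0, margin_c=0.0):
    """True if the member needs added fire protection, optionally
    with a designer-chosen temperature margin."""
    return peak_temp_c + margin_c >= critical_temp_c

# With the simulated beam surface temperature of 150 °C, the utilization
# is about 0.29, so no fire-resistant coating is required.
```

Real assessments would of course also account for load level, section factor, and fire duration; this sketch only captures the temperature comparison stated in the abstract.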
444 Influence of Long-Term Variability in Atmospheric Parameters on Ocean State over the Head Bay of Bengal
Authors: Anindita Patra, Prasad K. Bhaskaran
Abstract:
The atmosphere and ocean form a dynamically linked system that governs the exchange of energy, mass, and gas at the air-sea interface. The exchange of energy takes place in the form of sensible heat, latent heat, and momentum, commonly referred to as fluxes along the atmosphere-ocean boundary. Large-scale features such as the El Niño-Southern Oscillation (ENSO) are a classic example of the interaction mechanisms that occur along the air-sea interface and that govern the inter-annual variability of the Earth’s climate system. Most importantly, the ocean and atmosphere act in tandem as a coupled system, thereby maintaining the energy balance of the climate system, a manifestation of the coupled air-sea interaction process. The present work attempts to understand the long-term variability in atmospheric parameters (from the surface to upper levels) and to investigate their role in influencing surface ocean variables. More specifically, the influence of the atmospheric circulation and its variability on the mean Sea Level Pressure (SLP) has been explored. The study reports a critical examination of both ocean and atmosphere parameters during the monsoon season over the head Bay of Bengal region. A trend analysis has been carried out for several atmospheric parameters, such as air temperature, geopotential height, and omega (vertical velocity), at different vertical levels in the atmosphere (from the surface to the troposphere), covering the period from 1992 to 2012. The Reanalysis 2 dataset from the National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) was used in this study. The study shows that the variability in air temperature and omega corroborates the variation noticed in geopotential height. Further, the study finds that, for the lower atmosphere, the geopotential heights depict a typical east-west contrast, exhibiting a zonal dipole behavior over the study domain.
In addition, the study clearly shows that the variations over different levels in the atmosphere play a pivotal role in supporting the observed dipole pattern, as evidenced by the trends in SLP and the associated surface wind speed and significant wave height over the study domain.
Keywords: air temperature, geopotential height, head Bay of Bengal, long-term variability, NCEP reanalysis 2, omega, wind-waves
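The trend analysis described above amounts to fitting a least-squares slope to each parameter's time series at each level. A minimal sketch on a synthetic series (the data values are illustrative, not the NCEP-DOE reanalysis):

```python
def linear_trend(years, values):
    """Least-squares slope of `values` against `years` (units per year)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    cov = sum((y - my) * (v - mv) for y, v in zip(years, values))
    var = sum((y - my) ** 2 for y in years)
    return cov / var

# A synthetic 1992-2012 series warming by 0.05 units per year:
years = list(range(1992, 2013))
series = [0.05 * (y - 1992) for y in years]
slope = linear_trend(years, series)  # recovers 0.05
```

In a real analysis the same fit would be applied per grid point and per pressure level, and the slope tested for statistical significance before being interpreted as a trend.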
443 Experimental Quantification of the Intra-Tow Resin Storage Evolution during RTM Injection
Authors: Mathieu Imbert, Sebastien Comas-Cardona, Emmanuelle Abisset-Chavanne, David Prono
Abstract:
Short-cycle-time Resin Transfer Molding (RTM) appears to be of great interest for the mass production of automotive or aeronautical lightweight structural parts. During the RTM process, the two components of a resin are mixed on-line and injected into the cavity of a mold in which a fibrous preform has been placed. Injection and polymerization occur simultaneously in the preform, inducing evolutions of temperature, degree of cure, and viscosity that in turn affect flow and curing. In order to adjust the processing conditions to reduce the cycle time, it is therefore essential to understand and quantify the physical mechanisms occurring in the part during injection. In a previous study, a dual-scale simulation tool was developed to help determine the optimum injection parameters. This tool finely tracks the distribution of the resin and the evolution of its properties during reactive injections with on-line mixing. The tows and channels of the fibrous material are considered separately to deal with the consequences of the dual-scale morphology of continuous-fiber textiles. The simulation tool reproduces the unsaturated area at the flow front generated by the tow/channel difference of permeability. Resin “storage” in the tows after saturation is also taken into account, as it may significantly affect the distribution and evolution of the temperature, degree of cure, and viscosity in the part during reactive injections. The aim of the current study is, through experiments, to understand and quantify the “storage” evolution in the tows in order to adjust and validate the numerical tool. The presented study is based on four experimental repeats conducted on three different types of textiles: a unidirectional Non-Crimp Fabric (NCF), a triaxial NCF, and a satin weave. Model fluids, dyes, and image analysis are used to study quantitatively the resin flow in the saturated area of the samples.
Textile characteristics affecting the resin “storage” evolution in the tows are also analyzed. Finally, fully coupled on-line mixing reactive injections are conducted to validate the numerical model.
Keywords: experimental, on-line mixing, high-speed RTM process, dual-scale flow
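Dye-based image analysis of the kind mentioned above typically reduces to thresholding a grayscale frame and measuring the dyed (resin-filled) pixel fraction. A hypothetical sketch of one such measurement — the threshold value and the idea of equating dark pixels with stored resin are our assumptions, not the authors' stated procedure:

```python
import numpy as np

def saturated_fraction(gray_image, dye_threshold):
    """Fraction of pixels darker than the dye threshold, taken here
    as a proxy for the resin-filled (stored) area of the frame."""
    mask = np.asarray(gray_image, dtype=float) < dye_threshold
    return float(mask.mean())

# A toy 2x4 grayscale patch: 3 of 8 pixels are darker than the threshold.
patch = [[10, 200, 50, 220],
         [240, 90, 210, 250]]
frac = saturated_fraction(patch, 128)  # 0.375
```

Tracking this fraction frame by frame over an injection would give the kind of “storage” evolution curve the study sets out to quantify.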