Search results for: commercialization of technologies
185 Cereal Bioproducts Conversion to Higher Value Feed by Using Pediococcus Strains Isolated from Spontaneous Fermented Cereal, and Its Influence on Milk Production of Dairy Cattle
Authors: Vita Krungleviciute, Rasa Zelvyte, Ingrida Monkeviciene, Jone Kantautaite, Rolandas Stankevicius, Modestas Ruzauskas, Elena Bartkiene
Abstract:
The environmental impact of agricultural bioproducts from the processing of food crops is an increasing concern worldwide. Currently, cereal bran is used as a low-value ingredient for both human consumption and animal feed. The most popular bioprocessing technologies for increasing the nutritional and technological functionality of cereal bran are enzymatic processing and fermentation, and the most popular starters in fermented feed production are lactic acid bacteria (LAB), including pediococci. However, the ruminant digestive system is unique: it hosts billions of microorganisms which help the cow digest and utilize nutrients in the feed. To achieve efficient feed utilization and high milk yield, these microorganisms must have optimal conditions, and disbalance of this system is highly undesirable. The strains Pediococcus acidilactici BaltBio01 and Pediococcus pentosaceus BaltBio02 were isolated from spontaneously fermented rye, identified (by the rep-PCR method), and characterized by their growth (Thermo Bioscreen C automatic turbidimeter), acidification rate (2 hours at pH 2.5), gas production (Durham method), and carbohydrate metabolism (API 50 CH test). The antimicrobial activities of the isolated pediococci against a variety of pathogenic and opportunistic bacterial strains previously isolated from diseased cattle, as well as their resistance to antibiotics, were evaluated (EFSA-FEEDAP method). The isolated Pediococcus strains were cultivated in a barley/wheat bran (90/10, m/m) substrate, and the developed supplements, with a high content of viable pediococci, were used for feeding Lithuanian black-and-white dairy cows. In addition, the influence of the supplements on milk production and composition was determined. Milk composition was evaluated with the LactoScope FTIR FT1.0 2001 (Delta Instruments, Holland). P. acidilactici BaltBio01 and P. pentosaceus BaltBio02 demonstrated versatile carbohydrate metabolism, growth at both 30°C and 37°C, and acid tolerance. The isolated strains were shown to be non-resistant to antibiotics and to have antimicrobial activity against undesirable microorganisms. By utilising barley/wheat bran in fermentation with the selected Pediococcus strains, it is possible to produce a safer feedstock (reduced Enterobacteriaceae, total aerobic bacteria, and yeast and mould counts) with a high content of pediococci. A significantly higher milk yield could be obtained after 33 days by feeding dairy cows a mix of the Pediococcus supplements, while a similar effect was achieved after 66 days of feeding with the separate strains. It can be stated that barley/wheat bran could be used for higher-value feed production in order to increase milk production. Further research is needed to identify the main mechanism of this positive action.
Keywords: barley/wheat bran, dairy cattle, fermented feed, milk, pediococcus
Procedia PDF Downloads 307
184 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs
Authors: Ignitia Motjolopane
Abstract:
Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work have created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation. SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether SMEs are starting a new business, pursuing growth, or seeking profitability. Integrating generative artificial intelligence in start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature, however, may lack comprehensive frameworks and guidelines for effectively integrating generative AI into the iterative business model innovation paths of start-ups. This paper examines the start-up business model innovation path with generative artificial intelligence. A theoretical approach is used to examine the iterative business model innovation path of start-up-focused SMEs with generative AI, articulating how generative AI may be used to support SMEs in systematically and cyclically building a business model that covers most or all business model components, and in analysing and testing the business model's viability throughout the process. The paper first explores the use of generative AI in market exploration. Market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. The paper then examines the use of generative AI in developing and testing viable value propositions and business models, and in identifying and selecting partners with generative AI support; selecting the right partners is crucial for start-ups and may significantly impact success. The paper further examines the use of generative AI in choosing the right information technology, in the funding process, in revenue model determination, and in stress testing business models. Stress testing validates a business model's strong and weak points by applying scenarios and evaluating the robustness of individual business model components and the interrelations between them. Stress testing may thus address key uncertainties, as misalignment between an organisation and its environment has been recognised as the leading cause of company failure, and generative AI may be used to generate business model stress-testing scenarios. The paper is expected to make a theoretical and practical contribution to crafting a robust business model innovation path with generative artificial intelligence in start-up SMEs.
Keywords: business models, innovation, generative AI, small medium enterprises
Procedia PDF Downloads 71
183 Ammonia Cracking: Catalysts and Process Configurations for Enhanced Performance
Authors: Frea Van Steenweghen, Lander Hollevoet, Johan A. Martens
Abstract:
Compared to other hydrogen (H₂) carriers, ammonia (NH₃) is one of the most promising, as it contains 17.6 wt% hydrogen and is easily liquefied at ≈ 9–10 bar at ambient temperature. More importantly, NH₃ is a carbon-free hydrogen carrier, with no CO₂ emission on final decomposition. Ammonia has a well-defined regulatory framework and a good track record regarding safety. Furthermore, industry already has an existing transport infrastructure of pipelines, tank trucks, and shipping technology, as ammonia has been manufactured and distributed around the world for over a century. While technological solutions for NH₃ synthesis and transportation are at hand, the missing link in the hydrogen delivery scheme from ammonia is an energy-lean and efficient technology for cracking ammonia into H₂ and N₂. The most explored option for ammonia decomposition is thermocatalytic cracking, which is the most energy-lean and robust approach compared to other technologies such as plasma and electrolysis. The thermocatalytic cracking process faces thermodynamic limitations: the decomposition reaction is favoured only at high temperatures (> 300°C) and low pressures (≈ 1 bar). At 350°C and 1 bar, the thermodynamic equilibrium limits the conversion to 99%. Gaining additional conversion up to, e.g., 99.9% necessitates heating to ca. 530°C. However, actually reaching thermodynamic equilibrium is infeasible, since a sufficient driving force is needed, requiring even higher temperatures; limiting the conversion to below the equilibrium composition is the more economical option. Thermocatalytic ammonia cracking is well documented in the scientific literature. Among the investigated metal catalysts (Ru, Co, Ni, Fe, …), ruthenium is known to be the most active for ammonia decomposition, with an onset of cracking activity around 350°C. To establish > 99% conversion, temperatures close to 600°C are required. Such high temperatures are likely to reduce not only the round-trip efficiency but also the catalyst lifetime, because of sintering of the supported metal phase. In this research, the first focus was catalyst bed design that avoids diffusion limitation. Experiments in our packed-bed tubular reactor set-up showed that extragranular diffusion limitations occur at low NH₃ concentrations when reaching high conversion, a phenomenon often overlooked in experimental work. A second focus was thermocatalyst development for ammonia cracking that avoids the use of noble metals. To this end, candidate metals and mixtures were deposited on a range of supports. Sintering resistance at high temperatures and the basicity of the support were found to be crucial catalyst properties, and the catalytic activity was promoted by adding alkali and alkaline earth metals. A third focus was studying the optimum process configuration by process simulation. A trade-off between conversion and favourable operational conditions (i.e., low pressure and high temperature) may lead to different process configurations, each with its own pros and cons. For example, high-pressure cracking would eliminate the need for post-compression but is detrimental to the thermodynamic equilibrium, leading to an optimum cracking pressure in terms of energy cost.
Keywords: ammonia cracking, catalyst research, kinetics, process simulation, thermodynamic equilibrium
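The equilibrium figures quoted above (≈ 99% conversion at 350°C and 1 bar, falling as pressure rises) can be reproduced to first order with an ideal-gas sketch. The snippet below is an illustration only: it assumes constant standard enthalpy and entropy of decomposition taken from common thermochemical tables, not data from this work.

```python
import math

R = 8.314  # J/(mol*K)
# NH3 -> 1/2 N2 + 3/2 H2 (per mole NH3). Round literature values,
# assumed temperature-independent (ideal-gas sketch, not design data):
DH = 45.9e3   # J/mol, endothermic
DS = 99.0     # J/(mol*K)

def Kp(T):
    """Equilibrium constant from dG = dH - T*dS (dimensionless, 1 bar reference)."""
    return math.exp(-(DH - T * DS) / (R * T))

def equilibrium_conversion(T, P):
    """Solve Kp = (y_N2^0.5 * y_H2^1.5 / y_NH3) * P for conversion x by bisection.

    Per mole of NH3 fed: n_NH3 = 1-x, n_N2 = x/2, n_H2 = 3x/2, total = 1+x.
    """
    k = Kp(T)
    def residual(x):
        total = 1.0 + x
        y_nh3 = (1.0 - x) / total
        y_n2 = (x / 2.0) / total
        y_h2 = (3.0 * x / 2.0) / total
        return (y_n2 ** 0.5 * y_h2 ** 1.5 / y_nh3) * P - k
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because two moles of gas form per mole of NH₃ cracked, total pressure enters the equilibrium expression as P¹, which is why equilibrium conversion drops as cracking pressure rises and why high-pressure cracking trades compression savings against conversion.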
Procedia PDF Downloads 66
182 The Future Control Rooms for Sustainable Power Systems: Current Landscape and Operational Challenges
Authors: Signe Svensson, Remy Rey, Anna-Lisa Osvalder, Henrik Artman, Lars Nordström
Abstract:
The electric power system is undergoing significant changes. Its operation and control are becoming partly modified, more multifaceted, and more automated, and supplementary operator skills might therefore be required. This paper discusses developing operational challenges in future power system control rooms, posed by the evolving landscape of sustainable power systems, driven in turn by the shift towards electrification and renewable energy sources. Based on a literature review, followed by interviews and a comparison with related domains of similar characteristics, a descriptive analysis was performed from a human factors perspective. The analysis is meant to identify trends, relationships, and challenges. A power control domain taxonomy includes a temporal domain (planning and real-time operation) and three operational domains within the power system (generation, switching, and balancing). Within each operational domain, there are different control actions, either in the planning stage or in real-time operation, that affect the overall operation of the power system. In addition to the temporal dimension, the control domains are divided in space between a multitude of different actors distributed across many different locations. A control room is a central location where different types of information are monitored and controlled, alarms are responded to, and deviations are handled by the control room operators. The operators' competencies, teamwork skills, and team shift patterns, as well as control system designs, are all important factors in ensuring efficient and safe electricity grid management. As the power system evolves with sustainable energy technologies, challenges arise. Questions are raised as to whether the operators' tacit knowledge, experience, and operational skills of today are sufficient to make constructive decisions to solve modified and new control tasks, especially during disturbed operations or abnormalities. Which new skills need to be developed in planning and real-time operation to provide efficient generation and delivery of energy through the system? How should user interfaces be developed to assist operators in processing the increasing amount of information? Are some skills at risk of being lost when the systems change? How should the physical environment, and collaborations between different stakeholders within and outside the control room, develop to support operator control? To conclude, the system change will provide many benefits related to electrification and renewable energy sources, but it is important to address the operators' challenges that come with increasing complexity. The control tasks will be modified, and additional operator skills will be needed to perform efficient and safe operations. Also, the whole human-technology-organization system needs to be considered, including the physical environment, the technical aids and information systems, the operators' physical and mental well-being, as well as the social and organizational systems.
Keywords: operator, process control, energy system, sustainability, future control room, skill
Procedia PDF Downloads 96
181 Characterization of Surface Microstructures on Bio-Based PLA Fabricated with Nano-Imprint Lithography
Authors: D. Bikiaris, M. Nerantzaki, I. Koliakou, A. Francone, N. Kehagias
Abstract:
In the present study, the formation of structures in poly(lactic acid) (PLA) has been investigated with respect to producing areas of regular, superficial features with dimensions comparable to those of cells or biological macromolecules. Nanoimprint lithography (NIL), a method of pattern replication in polymers, has been used for the production of features ranging from tens of micrometers, covering areas up to 1 cm², down to hundreds of nanometers. Both micro- and nanostructures were faithfully replicated. Potentially, PLA has wide uses within biomedical fields, from implantable medical devices, including screws and pins, to membrane applications, such as wound covers, and even as an injectable polymer for, for example, lipoatrophy. The possibility of fabricating structured PLA surfaces, with features of the dimensions associated with cells or biological macromolecules, is of interest in fields such as cellular engineering. Imprint-based technologies have demonstrated the ability to selectively imprint polymer films over large areas, resulting in 3D imprints over flat, curved, or pre-patterned surfaces. Here, we compare non-patterned PLA film with PLA film nano-patterned by NIL. A silicon nanostructured stamp (provided by the Nanotypos company), having positive and negative protrusions, was used to pattern PLA films by means of thermal NIL. The polymer film was heated to 40–60°C above its Tg and embossed at a pressure of 60 bar for 3 min, and the stamp and substrate were demolded at room temperature. Scanning electron microscope (SEM) images showed good replication fidelity of the Si stamp. Contact-angle measurements suggested that positive microstructuring of the polymer (where features protrude from the polymer surface) produced a more hydrophilic surface than negative microstructuring. The ability to structure the surface of poly(lactic acid) is allied to the polymer's post-processing transparency and proven biocompatibility. Films produced in this way were also shown to enhance the aligned attachment behavior and proliferation of Wharton's Jelly mesenchymal stem cells, leading to the observed growth contact guidance. The attachment patterns of some bacteria highlighted that the nano-patterned PLA structure can reduce the propensity of bacteria to attach to the surface, with a greater bactericidal activity demonstrated against Staphylococcus aureus cells. These biocompatible, micro- and nanopatterned PLA surfaces could be useful for polymer–cell interaction experiments at dimensions at, or below, that of individual cells. Indeed, post-fabrication modification of the microstructured PLA surface with materials such as collagen (which can further reduce the hydrophobicity of the surface) will extend the range of applications, possibly through the use of PLA's inherent biodegradability. Further study is being undertaken to examine whether these structures promote cell growth on the polymer surface.
Keywords: poly(lactic acid), nano-imprint lithography, anti-bacterial properties, PLA
Procedia PDF Downloads 330
180 Resolving a Piping Vibration Problem by Installing Viscous Damper Supports
Authors: Carlos Herrera Sierralta, Husain M. Muslim, Meshal T. Alsaiari, Daniel Fischer
Abstract:
Preventing fatigue from flow-induced piping vibration in the oil & gas sector demands not only the constant development of engineering design methodologies based on available software packages, but also special piping support technologies for designing safe and reliable piping systems. The vast majority of piping vibration problems in the oil & gas industry are provoked by the process flow characteristics, which are intrinsically related to the fluid properties, the type of service, and its different operational scenarios. In general, the corrective actions recommended for flow-induced vibration in piping systems can be grouped into two major areas: those which affect the excitation mechanisms, typically associated with process variables, and those which affect the response mechanism of the pipework itself and its associated steel support structure. Where possible, the first option is to solve the flow-induced problem from the excitation mechanism perspective. However, in producing facilities, changing process parameters might not always be convenient, as it could lead to reduced production rates or require a system shutdown to perform the required piping modification. That impediment might lead to the second option, which is to modify the response of the piping system to the excitation generated by the process flow. In principle, shifting the natural frequency of the system well above the frequency inherent to the process eliminates, or considerably reduces, the level of vibration experienced by the piping system. Tightening up the clearances at the supports (ideally to zero gap) and adding new static supports are typical ways of increasing the natural frequency of the piping system. However, stiffening the piping system alone may not be sufficient to resolve the vibration problem, and in some cases it might not be feasible at all, as the available piping layout could limit the addition of supports due to thermal expansion/contraction requirements. In these cases, viscous damper supports can be recommended, as these devices allow relatively large quasi-static movement of the piping while providing sufficient capability to dissipate the vibration. When correctly selected and installed, viscous damper supports can therefore significantly affect the response of the piping system over a wide range of frequencies. Viscous dampers cannot, however, be used to support sustained static loads. This paper shows, through a real case example, a methodology for selecting viscous damper supports via a dynamic analysis model. By implementing this methodology, it was possible to resolve the piping vibration problem by adequately redesigning the existing static piping supports and adding new viscous damper supports. This was conducted on-stream on the crude oil pipeline in question, without the need to reduce the production of the plant. The methodology of this paper can be applied to solve similar cases in a straightforward manner.
Keywords: dynamic analysis, flow induced vibration, piping supports, turbulent flow, slug flow, viscous damper
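The effect of adding static supports on natural frequency can be illustrated with the textbook formula for a simply supported Euler-Bernoulli beam. This is an idealization for intuition only, with assumed pipe properties; it is not the dynamic analysis model used in the paper.

```python
import math

def beam_natural_freq_hz(E, I, m_per_len, L, mode=1):
    """Natural frequencies of a simply supported uniform beam (Euler-Bernoulli):
    f_n = (n^2 * pi / (2 * L^2)) * sqrt(E * I / m_per_len)
    """
    return (mode ** 2 * math.pi / (2.0 * L ** 2)) * math.sqrt(E * I / m_per_len)

# Illustrative values loosely resembling a 6-inch schedule-40 steel line
# (assumed for the sketch, not taken from the case study):
E = 200e9      # Pa, steel Young's modulus
I = 1.17e-5    # m^4, approximate second moment of area
m = 42.0       # kg/m, pipe plus contents (assumed)

f_span = beam_natural_freq_hz(E, I, m, L=8.0)  # 8 m unsupported span
f_half = beam_natural_freq_hz(E, I, m, L=4.0)  # one added mid-span support
```

Since f₁ scales with 1/L², halving the unsupported span by adding one mid-span support quadruples the first natural frequency, which is why adding static supports is the usual first move for shifting the system above the process excitation frequency.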
Procedia PDF Downloads 143
179 CO2 Utilization by Reverse Water-Gas Shift and Fischer-Tropsch Synthesis for Production of Heavier Fraction Hydrocarbons in a Container-Sized Mobile Unit
Authors: Francisco Vidal Vázquez, Pekka Simell, Christian Frilund, Matti Reinikainen, Ilkka Hiltunen, Tim Böltken, Benjamin Andris, Paolo Piermartini
Abstract:
Carbon capture and utilization (CCU) is one of the key topics in the mitigation of CO2 emissions. Many different technologies are applied for the production of diverse chemicals from CO2, such as synthetic natural gas, Fischer-Tropsch products, methanol, and polymers. Power-to-Gas and Power-to-Liquids concepts arise as a synergetic solution for storing energy and producing value-added products from intermittent renewable energy sources and CCU. VTT is a research and technology development company with energy in transition as one of its key focus areas, and it has extensive experience in the piloting and upscaling of new energy and chemical processes. Recently, VTT developed and commissioned a Mobile Synthesis Unit (MOBSU) in close collaboration with INERATEC, a spin-off company of the Karlsruhe Institute of Technology (KIT, Germany). The MOBSU is a multipurpose synthesis unit for upgrading CO2 to energy carriers and chemicals, which can be transported to sites where CO2 emissions and renewable energy are available. The MOBSU is initially used for the production of fuel compounds and chemical intermediates by a combination of two consecutive processes: reverse water-gas shift (rWGS) and Fischer-Tropsch synthesis (FT). First, CO2 is converted to CO by high-pressure rWGS, and then the CO- and H2-rich effluent is used as feed for FT using an intensified reactor technology developed and designed by INERATEC. The chemical equilibrium of the rWGS reaction is not affected by pressure. Nevertheless, compression would be required between rWGS and FT if rWGS were operated at atmospheric pressure, which would also require cooling of the rWGS effluent, water removal, and reheating. For that reason, rWGS is operated in the MOBSU using a precious metal catalyst at a similar pressure to FT, to simplify the process. However, operating rWGS at high pressure also has some disadvantages, such as methane and carbon formation and more demanding material specifications. The main parts of the FT module are an intensified reactor, a hot trap to condense the FT wax products, and a cold trap to condense the FT liquid products. The FT synthesis is performed using a cobalt catalyst in a novel compact reactor technology with an integrated, highly efficient water evaporation cooling cycle. The MOBSU started operation in November 2016. First, the FT module is tested using H2 and CO as feedstock. Subsequently, the rWGS and FT modules are operated together using CO2 and H2 as feedstock at a total flowrate of ca. 5 Nm³/h. In spring 2017, the MOBSU will be integrated with a direct air capture (DAC) unit for CO2 and a PEM electrolyser unit at Lappeenranta University of Technology (LUT) premises for demonstration of the SoletAir concept. This will be the first time that synthetic fuels are produced by a combination of a DAC unit and an electrolyser unit using solar power for H2 production.
Keywords: CO2 utilization, demonstration, Fischer-Tropsch synthesis, intensified reactors, reverse water-gas shift
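The statement that the rWGS equilibrium is unaffected by pressure follows from the zero change in gas moles in CO2 + H2 ⇌ CO + H2O. A rough ideal-gas sketch, using round literature values for the standard enthalpy and entropy (assumed constant) and an equimolar feed; these numbers are illustrative, not data from this work:

```python
import math

R = 8.314     # J/(mol*K)
DH = 41.2e3   # J/mol, rWGS: CO2 + H2 -> CO + H2O, endothermic (approx.)
DS = 42.0     # J/(mol*K), approx. from standard entropies

def K_rwgs(T):
    """Equilibrium constant; with Dn(gas) = 0 it carries no pressure dependence."""
    return math.exp(-(DH - T * DS) / (R * T))

def conversion_equimolar(T):
    """CO2 conversion x for a 1:1 CO2/H2 feed.

    K = (x * x) / ((1 - x) * (1 - x)), so x = sqrt(K) / (1 + sqrt(K)).
    All mole-fraction and total-pressure factors cancel because the
    reaction conserves the number of gas moles.
    """
    rk = math.sqrt(K_rwgs(T))
    return rk / (1.0 + rk)
```

Because pressure cancels out of the equilibrium expression, running rWGS at FT pressure costs nothing thermodynamically, which is the basis for the process simplification described above; the penalties are kinetic and material-related (methane and carbon formation, more demanding specifications).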
Procedia PDF Downloads 290
178 Development of Alternative Fuels Technologies for Transportation
Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej
Abstract:
Currently, vehicles in automotive transport are powered almost exclusively by hydrocarbon-based fuels. As the consumption of hydrocarbon fuels increases, their quality parameters are tightened for the sake of a clean environment, and at the same time efforts are undertaken to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are: to increase vehicle efficiency, to reduce environmental impact and greenhouse gas emissions, and to save limited oil resources. Significant progress has been made on alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME), and biodiesel. In addition, the biggest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol make excellent fuels for spark-ignition engines. Their advantages are a high antiknock value, which underlies their application as an additive (10%) to unleaded petrol, and the relative purity of the produced exhaust gases. Ethanol is produced by distillation of plant products whose value as food can make this use questionable. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass, and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plants releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such arrangements have been proposed in the literature; going through a stage of intermediates is, as yet, inevitable. The most popular route is conversion to methanol, which can be processed further to dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol production uses natural gas as a raw material but requires expensive and advanced production processes. With respect to pollutant emissions, an optimal vehicle fuel is LPG, which is used in many countries as an engine fuel. The production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents a small percentage; its potential as an alternative to traditional fuels is therefore proportionately limited. Biogas may also be an excellent engine fuel, but it is subject to the same limitations as ethanol, as similar production processes and raw materials are used. The most essential fuel in the campaign to protect the environment against pollution is natural gas, which as a fuel may be either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production by steam reforming. Hydrogen can be used as a basic starting material for the chemical industry, as an important raw material in refinery processes, and as a fuel for vehicle transportation. Natural gas used as CNG represents an excellent compromise, as the technology is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative energy sources that are derived from fuel and harmless to the environment. For these reasons, CNG as a fuel attracts considerable interest worldwide.
Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)
Procedia PDF Downloads 181
177 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are currently used ever more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction process, a stage to which they have been applied less. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of the matrix operations and of a two-dimensional optimization function used to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and of appropriate algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the finite element modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from the information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time to 1/180 of that of software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, and these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
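The reconstruction described above can be sketched in software as a linear inverse problem: a transfer model maps the unknown force components to the measured normal-stress map, and the forces are recovered by least squares. Everything below (the matrix sizes and the random transfer matrix G) is synthetic and stands in for the paper's actual model-driven formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear transfer model: stress = G @ force. In a real system G
# would come from an elastic contact / FEM model of the sensor; here it is
# random, purely to illustrate the matrix-operation pipeline.
n_readings, n_forces = 100, 30   # e.g. a 10x10 stress map, 10 taxels x 3 components
G = rng.normal(size=(n_readings, n_forces))
f_true = rng.normal(size=n_forces)   # stacked (fx, fy, fz) per taxel
stress = G @ f_true                  # simulated normal-stress readings

# Reconstruction: least-squares inverse of the transfer model. These matrix
# products are the operations a hardware implementation would parallelize.
f_est, *_ = np.linalg.lstsq(G, stress, rcond=None)
rel_err = np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true)
```

With noiseless synthetic data the recovery is essentially exact; the 22-32% errors reported above arise from model mismatch against FEM reference simulations, not from the linear algebra itself.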
Procedia PDF Downloads 195
176 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification
Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas
Abstract:
Biosensors play a crucial role in the detection of molecules nowadays due to their advantages of user-friendliness, high selectivity, the analysis in real time and in-situ applications. Among them, Lateral Flow Immunoassays (LFIAs) are presented among technologies for point-of-care bioassays with outstanding characteristics such as affordability, portability and low-cost. They have been widely used for the detection of a vast range of biomarkers, which do not only include proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being done to add to the method the quantifying capability based on the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors for detection of magnetic labels. Bringing together the required characteristics mentioned before, our research group has developed a biosensor to detect biomolecules. Superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. SPMNPs are detected by their interaction with a high-frequency current flowing on a printed micro track. By means of the instant and proportional variation of the impedance of this track provoked by the presence of the SPNPs, quantitative and rapid measurement of the number of particles can be obtained. This way of detection requires no external magnetic field application, which reduces the device complexity. On the other hand, the major limitations of LFIAs are that they are only qualitative or semiquantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the necessity of always-constant ambient conditions to get reproducible results, the exclusive detection of the nanoparticles on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs. 
The approach followed was to coat the SPIONs with a specific monoclonal antibody targeting the protein under consideration via chemical bonds. A sandwich-type immunoassay was then prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an anti-IgG antibody (control line). When the sample flows along the strip, the SPION-labeled proteins are immobilized at the test line, which provides a magnetic signal as described before. Preliminary results using this practical combination for the detection and quantification of Prostate-Specific Antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, a LOD of 0.25 ng/mL was calculated with a confidence factor of 3 according to the IUPAC Gold Book definition. The versatility of the technique has also been proved with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles
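The LOD figure above follows the IUPAC convention LOD = k·s_blank / m with confidence factor k = 3, where s_blank is the standard deviation of blank measurements and m is the calibration slope. A minimal sketch of that calculation, with hypothetical calibration numbers (not the authors' data):

```python
# IUPAC-style limit of detection: LOD = k * s_blank / slope, k = 3.
# The calibration data below are invented for illustration only.

def calibration_slope(concs, signals):
    """Least-squares slope of signal vs. concentration."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(signals) / n
    num = sum((x - mx) * (y - my) for x, y in zip(concs, signals))
    den = sum((x - mx) ** 2 for x in concs)
    return num / den

def lod(s_blank, slope, k=3):
    """Limit of detection with confidence factor k (IUPAC Gold Book)."""
    return k * s_blank / slope

# Hypothetical PSA calibration: concentration (ng/mL) vs. impedance signal (a.u.)
concs = [0.5, 1.0, 2.0, 4.0, 8.0]
signals = [1.1, 2.0, 4.1, 7.9, 16.2]
m = calibration_slope(concs, signals)
print(lod(s_blank=0.17, slope=m))  # LOD in ng/mL
```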
Procedia PDF Downloads 232
175 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data
Authors: Andrea Ghermandi
Abstract:
Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determine their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits in natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use as obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. Such boundaries are used as input for the Application Program Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users in each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized Major Axis (SMA) regression is found to outperform Ordinary Least Squares regression and count data models in terms of predictive power insofar as standard verification statistics – such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE) – are concerned.
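The SMA calibration used above differs from OLS in that the slope is the ratio of the standard deviations, signed by the correlation. A minimal sketch, with invented photo-user-day/visitor pairs (not the study's data):

```python
# Standardized Major Axis (SMA) regression:
#   slope = sign(r) * sd(y) / sd(x),  intercept = mean(y) - slope * mean(x)
# Sample data below are hypothetical, for illustration only.
import math

def sma_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    syy = sum((v - my) ** 2 for v in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    r = sxy / math.sqrt(sxx * syy)
    slope = math.copysign(math.sqrt(syy / sxx), r)
    intercept = my - slope * mx
    return slope, intercept

# photo-user-days per year -> observed annual visitors (hypothetical pairs)
pud = [5, 12, 30, 80, 150]
visitors = [400, 900, 2600, 6500, 12000]
slope, intercept = sma_fit(pud, visitors)
print(slope, intercept)  # calibration to predict visits from photo-user-days
```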
The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information regarding the home location of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. Such information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and provision of cultural ecosystem services through public use in a multi-functional approach that is compatible with the need to protect public health.
Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds
Procedia PDF Downloads 180
174 Solar Power Forecasting for the Bidding Zones of the Italian Electricity Market with an Analog Ensemble Approach
Authors: Elena Collino, Dario A. Ronzio, Goffredo Decimi, Maurizio Riva
Abstract:
The rapid increase of renewable energy in Italy is led by wind and solar installations, and the 2017 Italian energy strategy foresees a further development of these sustainable technologies, especially solar. This has created new opportunities as well as new challenges and problems to deal with. The growth of renewables makes it possible to meet European requirements regarding energy and environmental policy, but these sources are difficult to manage because they are intermittent and non-programmable. Operationally, these characteristics can lead to instability of the voltage profile and increased uncertainty in energy reserve scheduling. The increasing renewable production must be considered with ever more attention, especially by the Transmission System Operator (TSO). Every day, once the market outcome has been determined, the TSO issues energy dispatch orders over extended areas, defined mainly on the basis of power transmission limitations. In Italy, six market zones are defined: Northern Italy, Central-Northern Italy, Central-Southern Italy, Southern Italy, Sardinia, and Sicily. Accurate hourly renewable power forecasting for the day ahead over these extended areas improves both dispatching and reserve management. In this study, an operational forecasting tool for the hourly solar output of the six Italian market zones is presented, and its performance is analysed. The implementation couples a numerical weather prediction model with a statistical post-processing step that derives the power forecast from the meteorological projection. The weather forecast is obtained from the limited-area model RAMS over the Italian territory, initialized with IFS-ECMWF boundary conditions. The post-processing calculates the solar power production with the Analog Ensemble (AN) technique.
This statistical approach forecasts the production using a probability distribution of the measured production registered in the past when the weather scenario looked very similar to the forecasted one. The similarity is evaluated for the components of the solar radiation: global (GHI), diffuse (DIF), and direct normal (DNI) irradiation, together with the corresponding azimuth and zenith solar angles. These are, in fact, the main factors that affect solar production. Considering that the AN performance is strictly related to the length and quality of the historical data, a training period of more than one year has been used. The training set consists of historical Numerical Weather Prediction (NWP) forecasts at 12 UTC for the GHI, DIF, and DNI variables over the Italian territory, together with the corresponding hourly measured production for each of the six zones. The AN technique makes it possible to estimate the aggregate solar production in an area without information about the technological characteristics of all the solar parks present in it; besides, such information is often only partially available. Every day, the hourly solar power forecast for the six Italian market zones is made publicly available through a website.
Keywords: analog ensemble, electricity market, PV forecast, solar energy
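The analog search described above can be sketched in a few lines: for a new NWP forecast, find the most similar past forecasts and use the production measured on those occasions as the predictive ensemble. The variables, distance metric, and all data below are illustrative assumptions, not the operational tool's implementation:

```python
# Minimal Analog Ensemble (AnEn) sketch: rank past forecasts by Euclidean
# distance over (GHI, DIF, DNI, azimuth, zenith) and return the measured
# production of the k closest analogs. All numbers are invented.
import math

def analog_ensemble(forecast, past_forecasts, past_production, k=3):
    """Return the k production values whose past forecasts best match `forecast`."""
    ranked = sorted(
        zip(past_forecasts, past_production),
        key=lambda pair: math.dist(forecast, pair[0]),
    )
    return [prod for _, prod in ranked[:k]]

# Hypothetical training set: (GHI, DIF, DNI, azimuth, zenith) -> MWh produced
past_forecasts = [
    (600, 120, 700, 180, 40),
    (300, 200, 250, 175, 55),
    (650, 100, 750, 182, 38),
    (100, 90, 50, 170, 70),
]
past_production = [5.1, 2.4, 5.6, 0.7]

ensemble = analog_ensemble((620, 110, 720, 181, 39), past_forecasts, past_production, k=2)
# The ensemble mean gives a deterministic forecast; the spread gives uncertainty.
mean_forecast = sum(ensemble) / len(ensemble)
```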
Procedia PDF Downloads 158
173 The Digital Transformation of Life Insurance Sales in Iran With the Emergence of Personal Financial Planning Robots; Opportunities and Challenges
Authors: Pedram Saadati, Zahra Nazari
Abstract:
Anticipating and identifying the future opportunities and challenges that industry players face with the emergence of new knowledge and technologies of personal financial planning, and providing practical solutions, are among the goals of this research. For this purpose, a futures-research approach based on the opinions of the main players in the insurance industry was used. The research method comprised four stages: 1) a survey of the specialist life insurance salesforce in order to identify the variables; 2) ranking of the variables by selected experts using a researcher-made questionnaire; 3) a panel of experts aimed at understanding the mutual effects of the variables; and 4) statistical analysis of the cross-impact matrix in the MICMAC software. The integrated analysis of the variables influencing the future was carried out with the Structural Analysis method, one of the efficient and innovative methods of futures research. A list of opportunities and challenges was identified through a survey of best-selling life insurance representatives selected by snowball sampling. In order to prioritize and identify the most important issues, all the issues raised were sent, via a researcher-made questionnaire, to theoretically selected experts. The respondents scored the importance of 36 variables, so that the opportunity and challenge variables could be prioritized. Eight of the variables identified in the first stage were removed by the selected experts, leaving 28 variables for examination in the third stage; to facilitate the examination, these were divided into six categories: organization and management (11 variables), marketing and sales (7), social and cultural (6), technological (2), rebranding (1), and insurance (1).
The reliability of the researcher-made questionnaire was confirmed by a Cronbach's alpha of 0.96. In the third stage, a panel of five insurance industry experts was formed, and the consensus of their opinions about the influence of the factors on each other was entered into the matrix. The matrix covered the interrelationships of the 28 variables, which were investigated using the structural analysis method. Analysis of the matrix data with the MICMAC software indicates that "correct training in the use of the software", "the weakness of insurance companies' technology in personalizing products", "adopting a customer-equipping approach", and "honesty in declaring that a customer does not need insurance" are the most influential challenges, while "a salesforce-equipping approach", "product personalization based on customer needs assessment", "a pleasant customer experience of being advised by consulting robots", "business improvement of the insurance company due to the use of these tools", "increased efficiency of the issuance process", and "optimal customer purchase" were identified as the most influential opportunities.
Keywords: personal financial planning, wealth management, advisor robots, life insurance, digital transformation
Procedia PDF Downloads 46
172 Study of a Decentralized Electricity Market on Awaji Island
Authors: Arkadiusz P. Wójcik, Tetsuya Sato, Shin-Ichiro Shima, Mateusz Malanowski
Abstract:
Over the last decades, new technologies have significantly changed the way information is transmitted and stored, and renewable energy sources have become prevalent and affordable. Cooperation between the Information and Communication Technology industry and the renewable energy industry makes it possible to create a next-generation, decentralized power grid. In this context, the study seeks to identify the wider benefits to the local Japanese economy resulting from the development of a decentralized electricity market. Our general approach aims to integrate an economic analysis (monetary appraisal of costs and benefits to society) with externalities that are not quantifiable in monetary terms (e.g., social impact, environmental impact). The study also highlights opportunities and sets out recommendations for the citizens of the island and the local government. A simulation forms the scientific basis of the economic impact analysis. Various types of energy sources have been taken into account: residential wind farms, residential wind turbines, solar farms, residential solar panels, and private solar farms. Analysis of local geographic and economic conditions allowed a customized business model to be created. Farmers on Awaji Island very often use crop rotation: during each cycle, one part of the field rests and replenishes nutrients, and in the next year another part of the field rests. Portable solar panels could be freely set up on the resting part and, at the end of the cycle, moved to the next resting part. Because of the spacious fields, 500 square meters of portable solar panels per household have been proposed and simulated. The devised simulation shows that the rate of return on investment for solar panels on the island could reach up to 37.21%. Supposing that about 20% of households install solar panels, they could produce 49.11% of the electric energy consumed by households on the island.
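The two headline figures above (return on investment and share of household demand covered) reduce to simple ratios. A back-of-the-envelope sketch, with all inputs invented placeholders rather than the study's simulation parameters:

```python
# ROI and demand-coverage ratios behind the 37.21% / 49.11% style figures.
# Every number below is an illustrative assumption, not data from the study.

def roi_percent(annual_net_revenue, investment):
    """Simple annual rate of return on investment, in percent."""
    return 100 * annual_net_revenue / investment

def coverage_percent(adopting_households, yearly_kwh_per_array,
                     total_households, yearly_kwh_per_household):
    """Share of total household electricity demand met by the arrays."""
    produced = adopting_households * yearly_kwh_per_array
    consumed = total_households * yearly_kwh_per_household
    return 100 * produced / consumed

print(roi_percent(annual_net_revenue=2_500_000, investment=9_000_000))   # yen
# 20% adoption (10,000 of 50,000 households), assumed per-array output:
print(coverage_percent(10_000, 8_800, 50_000, 3_600))
```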
The analysis shows that the rest of the energy demand can be met by one currently existing large solar farm and two wind farms, covering 97.59% of the electricity demand of households on the island. Although there are more than 7,000 agricultural fields on the island, young people tend to avoid agricultural work and prefer to move from the island to big cities, live there in small apartments, and work late into the night. The business model proposed in this study could increase a farmer's monthly income by ¥200,000 to ¥300,000 (1,600 to 2,400 euro). Young people could work less and have a higher standard of living than in a city. Creation of a decentralized electricity market can unlock significant benefits in other industries (e.g., electric vehicles), providing a welcome boost to economic growth, jobs, and quality of life.
Keywords: digital twin, Matlab, model-based systems engineering, simulink, smart grid, systems engineering
Procedia PDF Downloads 121
171 Development and Evaluation of Economical Self-cleaning Cement
Authors: Anil Saini, Jatinder Kumar Ratan
Abstract:
Nowadays, the key issue for the scientific community is to devise innovative technologies for the sustainable control of urban pollution. In cities, a large surface area of masonry structures, buildings, and pavements is exposed to the open environment, and it could be utilized for the control of air pollution if built from photocatalytically active cement-based construction materials such as concrete, mortars, paints, and blocks. Photocatalytically active cement is formulated by incorporating a photocatalyst in the cement matrix; such cement is generally known as self-cleaning cement. In the literature, self-cleaning cement has been synthesized by incorporating nanosized TiO₂ (n-TiO₂) as a photocatalyst in the formulation of the cement. However, the utilization of n-TiO₂ for the formulation of self-cleaning cement has the drawbacks of nanotoxicity, higher cost, and agglomeration as far as commercial production and applications are concerned. The use of microsized TiO₂ (m-TiO₂) in place of n-TiO₂ for the commercial manufacture of self-cleaning cement could avoid the above-mentioned problems. However, m-TiO₂ is less photocatalytically active than n-TiO₂ due to its smaller surface area, higher band gap, and increased recombination rate. As such, the use of m-TiO₂ in the formulation of self-cleaning cement may reduce the photocatalytic activity and, thus, the self-cleaning, depolluting, and antimicrobial abilities of the resultant cement material. Improving the photoactivity of m-TiO₂-based self-cleaning cement is therefore the key issue for its practical application in the present scenario. The current work proposes the use of surface-fluorinated m-TiO₂ for the formulation of self-cleaning cement to enhance its photocatalytic activity.
Calcined dolomite, a construction material, has also been utilized as a co-adsorbent along with the surface-fluorinated m-TiO₂ in the formulation of the self-cleaning cement to enhance the photocatalytic performance. The surface-fluorinated m-TiO₂, calcined dolomite, and the formulated self-cleaning cement were characterized using diffuse reflectance spectroscopy (DRS), X-ray diffraction analysis (XRD), field emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), BET (Brunauer-Emmett-Teller) surface area analysis, and energy-dispersive X-ray fluorescence spectrometry (EDXRF). The self-cleaning property of the as-prepared cement was evaluated using the methylene blue (MB) test, its depolluting ability was assessed through a continuous NOx removal test, and its antimicrobial activity was appraised using the zone-of-inhibition method. The as-prepared self-cleaning cement, obtained by uniformly mixing 87% clinker, 10% calcined dolomite, and 3% surface-fluorinated m-TiO₂, showed a remarkable self-cleaning property, providing 53.9% degradation of the coated MB dye. It also showed a noteworthy depolluting ability by removing 5.5% of NOx from the air. The inactivation of B. subtilis bacteria in the presence of light confirmed the significant antimicrobial property of the formulated self-cleaning cement. The self-cleaning, depolluting, and antimicrobial results are attributed to the synergetic effect of surface-fluorinated m-TiO₂ and calcined dolomite in the cement matrix. The present study opens an idea and route for further research into the facile and economical formulation of self-cleaning cement.
Keywords: microsized titanium dioxide (m-TiO₂), self-cleaning cement, photocatalysis, surface fluorination
Procedia PDF Downloads 170
170 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping
Authors: Emily Rowe
Abstract:
Introduction: Balance assessments can be used to help evaluate a person's risk of falls, determine causes of balance deficits, and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot off, together with foot placement error, as an outcome measure of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measuring foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants, who each attended two sessions. The trial task was to step onto one of four targets (two for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out.
Results: It was found that foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability, given the nature of the stepping task, and brings into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from, and much less precise than, the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure of anteroposterior and mediolateral COM velocities (AP velocity: p=0.000; ML velocity targets 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development the validity of the COM velocity measure could be improved. Possible options to investigate include whether there is an effect of inertial sensor placement with respect to pelvic marker placement, and implementing more complex methods of data processing to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed.
Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables
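The reliability statistic used above, a mean-rating (k = 2), absolute-agreement, two-way mixed-effects ICC, can be computed directly from the two-way ANOVA mean squares. A sketch assuming the McGraw and Wong formulation, with invented data (one row per participant, one column per session):

```python
# ICC(A,k): two-way model, absolute agreement, mean of k ratings.
# ICC(A,k) = (MSR - MSE) / (MSR + (MSC - MSE) / n), from ANOVA mean squares.
# The data below are invented for illustration.

def icc_a_k(data):
    n = len(data)        # subjects (rows)
    k = len(data[0])     # sessions/raters (columns)
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    msr = ss_rows / (n - 1)                                # between-subjects
    msc = ss_cols / (k - 1)                                # between-sessions
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (msr - mse) / (msr + (msc - mse) / n)

# Perfect session-to-session agreement yields ICC = 1
print(icc_a_k([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # → 1.0
```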
Procedia PDF Downloads 153
169 A Technology of Hot Stamping and Welding of Carbon Reinforced Plastic Sheets Using High Electric Resistance
Authors: Tomofumi Kubota, Mitsuhiro Okayasu
Abstract:
In recent years, environmental and energy problems typified by global warming have been intensifying, and transportation devices are required to reduce the weight of their structural materials in view of stricter fuel efficiency regulations and energy saving. The carbon fiber reinforced plastic (CFRP) used in this research is attracting attention as a structural material to replace metallic materials. Among CFRPs, thermoplastic CFRP is expected to expand its application range in terms of recyclability and cost. A hot stamping process offering high formability and weldability of unidirectional CFRP sheets is proposed, in which the sheets are heated by a purpose-designed technique: a high electric voltage is applied through the carbon fibers. The applied voltage is controlled via the area ratio of exposed carbon fiber on the sample surfaces; less exposed carbon fiber on the surface gives a higher electric resistance, leading to a higher sample temperature. In this way, the CFRP sheets can be heated to more than 150 °C. With the sample heated, the stamping and welding operations can be carried out, and by varying the sample temperature, the suitable stamping condition can be identified. Moreover, a proper welding connection of the CFRP sheets is proposed: a fusion bonding technique exploiting thermoplasticity, high current flow, and resistance heating, based on the principle of resistance spot welding. In particular, the relationship between the carbon fiber exposure rate and the electrical resistance value, which affects the bonding strength, is investigated. In this approach, a mechanical connection using rivets is also made for comparison with the welded joints. The change in connecting strength is reflected in the fracture mechanism.
Low and high connecting strengths correspond to separation at the interface between the two CFRP sheets and to fracture inside a CFRP sheet, respectively. In addition to these two fracture modes, micro-cracks in the CFRP are also detected. In this research, from the relationship between the surface carbon fiber ratio and the electrical resistance value, it was found that different carbon fiber ratios can give similar electrical resistance values. Therefore, we investigated whether the carbon fiber or the resin is more influential on the bonding strength. As a result, the lower the carbon fiber ratio, the higher the bonding strength, which is 50% better than the conventional average strength. This can be evaluated by observing whether the fracture mode is interface fracture or internal fracture.
Keywords: CFRP, hot stamping, welding, deformation, mechanical property
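The heating mechanism described in this abstract (less exposed fiber, higher resistance, hotter sample) can be illustrated with a toy Joule-heating model. Both the inverse-proportionality assumption and all constants below are invented for illustration; they are not the authors' measured relationship:

```python
# Toy model of the resistance-heating step: with a constant welding current I,
# Joule power is P = I^2 * R. Contact resistance R is modeled (as an
# assumption) as inversely proportional to the exposed carbon-fiber area ratio,
# so a smaller exposed area gives a higher R and more heating.

def contact_resistance(exposed_fiber_ratio, r_full=0.5):
    """Ohms; r_full is the assumed resistance at fully exposed fibers (ratio=1)."""
    return r_full / exposed_fiber_ratio

def joule_power(current_a, resistance_ohm):
    """Dissipated power in watts: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

for ratio in (1.0, 0.5, 0.2):
    r = contact_resistance(ratio)
    print(ratio, joule_power(10.0, r))  # lower exposure -> more heating
```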
Procedia PDF Downloads 125
168 Carbon Footprint Assessment and Application in Urban Planning and Geography
Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim
Abstract:
Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services but cannot exist in environments without food, energy, and water supply. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the separation of cities from agricultural areas increases the energy demand for transporting food and goods between regions. As energy resources are drawn from all over the world, environmental impacts that cross the boundaries of cities are also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, there is still a gap between energy supply and demand with current technology, even in technically advanced countries. Therefore, reducing energy demand is more realistic for sustainable development than relying solely on the development of technology. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, the carbon footprint has been assessed at different geographical scales, such as nation, city, region, household, and individual. Carbon footprint assessment for a nation or a city is possible using national or city-level statistics on energy consumption categories. By means of carbon footprint calculation, it is possible to compare the ecological capacity and deficit among nations and cities. The carbon footprint also offers great insight into the geographical distribution of carbon intensity at a regional level in the agricultural field. The study shows the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be adopted to measure household and individual carbon footprints.
For example, the first case study collected carbon footprint data from a survey measuring the home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on the carbon footprint of residents at an intra-urban scale in the capital city of Seoul, Korea. In this study, the individual carbon footprint of residents was calculated by converting the carbon intensities of respondents' home and travel fossil fuel use into metric tons of carbon dioxide (tCO₂), multiplying by conversion factors equivalent to the carbon intensities of each energy source, such as electricity, natural gas, and gasoline. The carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, the carbon footprint may be measured and applied in various spatial units, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint application in planning and geography. In addition, consumption of food, goods, and services can be included in carbon footprint calculations in the areas of urban planning and geography.
Keywords: carbon footprint, case study, geography, urban planning
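The conversion described above, energy use by source multiplied by source-specific factors and summed into tCO₂, can be sketched as follows. The emission factors and survey values are illustrative placeholders, not the factors used in the cited case studies:

```python
# Household carbon footprint: sum(use * emission factor) per source, in tCO2.
# All factors and survey values below are assumed, for illustration only.

EMISSION_FACTORS_KG_CO2 = {   # kgCO2 per unit of use (assumed values)
    "electricity_kwh": 0.45,
    "natural_gas_m3": 2.0,
    "gasoline_l": 2.3,
}

def household_footprint_tco2(use_by_source):
    """Convert annual energy use by source into metric tons of CO2."""
    kg = sum(
        amount * EMISSION_FACTORS_KG_CO2[source]
        for source, amount in use_by_source.items()
    )
    return kg / 1000  # kg -> metric tons

# Hypothetical survey response: annual home energy use and travel fuel
survey_response = {"electricity_kwh": 3600, "natural_gas_m3": 500, "gasoline_l": 800}
print(household_footprint_tco2(survey_response))  # annual footprint in tCO2
```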
Procedia PDF Downloads 289
167 Globalization of Pesticide Technology and Sustainable Agriculture
Authors: Gagandeep Kaur
Abstract:
The pesticide industry is a major supplier of agricultural inputs. Pesticides control weeds, fungal diseases, and other causes of yield losses in agricultural production. In the agribusiness and agrichemical industries, globalization of markets, competition, and innovation are the dominant trends. Because of its tradition of increasing the productivity of agro-systems through generic, universally applicable technologies, innovation in the agrichemical industry is limited. The marketing of agricultural technology needs to deal with various trends, such as locally organized forces that envision regionalized sustainable agriculture in the future. Agricultural production has changed dramatically over the past century. Before World War II, agricultural production featured low inputs of money, high labor, mixed farming, and low yields. Although mineral fertilizers were already applied in the second half of the 19th century, most of the crops were restricted by local climatic, geological, and ecological conditions. After World War II, in the period of reconstruction, political and socioeconomic pressure changed the nature of agricultural production. For a growing population, food security at low prices and securing farmer income at acceptable levels became political priorities. The new European Common Agricultural Policy aims to reduce overproduction, liberalize world trade, and protect landscapes and natural habitats. Farmers have to increase the quality of their production and control costs because of increased competition from the world market. Pesticides should be more effective at lower application doses, less toxic, and not pose a threat to groundwater. There is a big debate taking place about how, and whether, to mitigate the intensive use of pesticides. This debate is about the future of agriculture: sustainable agriculture.
This is possible by moving away from conventional agriculture, which is characterized by high inputs and high yields and in which pesticides are used across a wide range of crop production. Moving away from conventional agriculture is possible through the gradual adoption of less disturbing and less polluting agricultural practices at the level of the cropping system. A healthy environment for crop production in the future requires the maintenance of chemical, physical, and biological soil properties, and the emission of volatile compounds into the atmosphere must also be minimized. Companies are limiting themselves to a particular interpretation of sustainable development, characterized by technological optimism and production maximization. The main objective of the paper is therefore to present the trends in the pesticide industry and in agricultural production in the era of globalization; the second objective is to analyze sustainable agriculture. Pesticide companies seem to have identified biotechnology as a promising alternative and supplement to the conventional business of selling pesticides. The agricultural sector is in the process of transforming its conventional mode of operation: some experts suggest that farmers move towards precision farming, while others suggest engaging in organic farming. The methodology of the paper is historical and analytical, and both primary and secondary sources will be used.
Keywords: globalization, pesticides, sustainable development, organic farming
Procedia PDF Downloads 98
166 Evaluating the ‘Assembled Educator’ of a Specialized Postgraduate Engineering Course Using Activity Theory and Genre Ecologies
Authors: Simon Winberg
Abstract:
The landscape of professional postgraduate education is changing: the focus of these programmes is moving from preparing candidates for a life in academia towards training in the expert knowledge and skills needed to support industry. This is especially pronounced in engineering disciplines, where increasingly complex products draw on a depth of knowledge from multiple fields. It connects strongly with the broader notion of Industry 4.0, in which technology and society are brought together to achieve more powerful and desirable products, but products whose inner workings are also more complex than before. The changes in what we do, and how we do it, have a profound impact on what industry would like universities to provide. One such change is the increased demand for taught doctoral and Master's programmes. These programmes aim to provide skills and training for professionals, expanding their knowledge of state-of-the-art tools and technologies. This paper investigates one such course, namely a Software Defined Radio (SDR) Master's degree course. The teaching support for this course had to be drawn from an existing pool of academics, none of whom were specialists in this field. The paper focuses on the kind of educator, a ‘hybrid academic’, assembled from available academic staff and bolstered by research. The conceptual framework for this paper combines Activity Theory and Genre Ecology: Activity Theory is used to reason about learning and interactions during the course, and Genre Ecology is used to model the building and sharing of technical knowledge related to using tools and artifacts. Data were obtained from meetings with students and lecturers, logs, project reports, and course evaluations. The findings show how the course, which was initially academically oriented, metamorphosed into a tool-dominant peer-learning structure, largely supported by the sharing of technical tool-based knowledge.
While the academic staff could address gaps in the participants’ fundamental knowledge of radio systems, the participants brought with them extensive specialized knowledge and tool experience which they shared with the class. This created a complicated dynamic in the class, which centered largely on engagements with technology artifacts, such as simulators, from which knowledge was built. The course was characterized by a richness of ‘epistemic objects’, which is to say objects that had knowledge-generating qualities. A significant portion of the course curriculum had to be adapted, and the learning methods changed to accommodate the dynamic interactions that occurred during classes. This paper explains the SDR Masters course in terms of conflicts and innovations in its activity system, as well as the continually hybridizing genre ecology, to show how the structuring and resource-dependence of the course transformed from its initial ‘traditional’ academic structure to a more entangled arrangement over time. It is hoped that insights from this paper would benefit other educators involved in the design and teaching of similar types of specialized professional postgraduate taught programmes.
Keywords: professional postgraduate education, taught masters, engineering education, software defined radio
Procedia PDF Downloads 92
165 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration
Authors: S. J. Addinell, T. Richard, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. The paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power-law relationship for the particle size distributions. Several percussive drilling parameters, such as RPM, applied fluid pressure and weight on bit, have been shown to influence the particle size distributions of the cuttings generated. This has a direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of the percussive system's operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can influence the oscillation frequency. Because of the changing drilling conditions, and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation.
Several techniques for determining the oscillating frequency have been investigated and are presented. With a conventional top drive drilling rig, spectral analysis of the applied fluid pressure, hydraulic feed force pressure, hold-back pressure and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must therefore be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis
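As an illustration of the spectral approach described in this abstract, the snippet below picks the dominant peak of a magnitude spectrum from a synthetic geophone-like trace. This is a generic sketch, not the DET CRC processing chain; the 30 Hz tone, 1 kHz sampling rate and noise level are invented for the example.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Estimate the dominant oscillation frequency (Hz) of a sensor
    trace from its magnitude spectrum, ignoring the DC component."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1:][np.argmax(spectrum[1:])]

# Hypothetical geophone trace: a 30 Hz percussive tone buried in noise.
fs = 1000.0                      # sampling rate, Hz (invented)
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s record
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 30.0 * t) + 0.5 * rng.standard_normal(t.size)

f_hat = dominant_frequency(trace, fs)
```

The same peak search would apply whether the input is surface pressure data (top drive rig) or a ground-vibration trace (coiled tubing rig).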
Procedia PDF Downloads 230
164 Unleashing the Power of Cerebrospinal System for a Better Computer Architecture
Authors: Lakshmi N. Reddi, Akanksha Varma Sagi
Abstract:
Biomimetic studies, which derive inspiration from natural processes in the objective world to develop novel technologies, are now widespread. Recent studies are diverse in nature, making their categorization quite challenging. Based on an exhaustive survey, we developed categorizations based either on the essential elements of nature (air, water, land, fire, and space) or on form/shape, functionality, and process. Such diverse studies as aircraft wings inspired by bird wings, a self-cleaning coating inspired by a lotus petal, wetsuits inspired by beaver fur, and search algorithms inspired by arboreal ant path networks lend themselves to these categorizations. Our categorizations of biomimetic studies allowed us to define a different dimension of biomimetics. This new dimension is not restricted to inspiration from the objective world; it is based on the premise that the biological processes observed in the objective world find their reflections in our human bodies in a variety of ways. For example, the lungs provide the most efficient example of liquid-gas phase exchange, the heart exemplifies a very efficient pumping and circulatory system, and the kidneys epitomize the most effective cleaning system. The main focus of this paper is to bring out the magnificence of the cerebro-spinal system (CSS) insofar as it relates to our current computer architecture. In particular, the paper uses four key measures to analyze the differences between the CSS and human-engineered computational systems: adaptability, sustainability, energy efficiency, and resilience. We found that the cerebrospinal system reveals some important challenges for the development and evolution of our current computer architectures.
In particular, the myriad ways in which the CSS is integrated with other systems and processes (circulatory, respiratory, etc.) offer useful insights into how human-engineered computational systems could be made more sustainable, energy-efficient, resilient, and adaptable. In our paper, we highlight the differences in energy consumption between the CSS and our current computational designs. Apart from the obvious differences in the materials used, the systemic nature of how the CSS functions provides clues for enhancing the life-cycles of our current computational systems. The rapid formation of, and changes in, the physiology of dendritic spines, and their synaptic plasticity causing memory changes (e.g., long-term potentiation and long-term depression), allowed us to formulate the differences in adaptability and resilience of the CSS. In addition, the CSS is sustained by the integrative functions of various organs, and its robustness comes from its interdependence with the circulatory system. The paper documents and analyzes quantifiable differences between the two in terms of the four measures. Our analyses point out possibilities for the development of computational systems that are more adaptable, sustainable, energy-efficient, and resilient. It concludes with potential approaches for technological advancement through the creation of more interconnected and interdependent systems that replicate the effective operation of the cerebro-spinal system.
Keywords: cerebrospinal system, computer architecture, adaptability, sustainability, resilience, energy efficiency
Procedia PDF Downloads 99
163 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland
Authors: A. Sgobba, C. Meskell
Abstract:
The feasibility of on-site electricity production from solar and wind, and the resulting load management for a specific manufacturing plant in Ireland, are assessed. The industry sector accounts, directly and indirectly, for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants in particular are often located in non-residential areas, since they require open spaces for production machinery, parking facilities for employees, appropriate routes for supply and delivery, and special connections to the national grid, and they have other environmental impacts. Since they have larger spaces than commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of energy system integration with low-power-density technologies, such as solar and wind, for on-site electricity generation. The available open space surrounding the analysed manufacturing plant can be used efficiently to produce a discrete quantity of energy that is instantaneously and locally consumed; transmission and distribution losses can therefore be reduced. Storage is not required, owing to the high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported on the bills. Such data are often not recorded or made available to third parties, since manufacturing companies usually keep track only of their overall energy expenditure. The solar potential is modelled for a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period at one-hour steps at a height of 10 m.
Since the hub of a typical wind turbine reaches a higher altitude, complementary data at 50 m for a different location were compared, and a model for estimating wind speed at the required height and location is defined. The Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered every hour by on-site generation (Level of Autonomy, LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied, EENS) are calculated. The economic viability of the project is assessed through Net Present Value (NPV), and the influence of the main technical and economic parameters on NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity, integration with a cogeneration technology is studied. Finally, the benefit of integrating wind, solar and a cogeneration technology into the energy system is evaluated and discussed.
Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources
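The Weibull step of a wind assessment of this kind can be sketched as below: a method-of-moments (Justus) fit of the shape and scale parameters, followed by the mean wind power density. This is a generic illustration on synthetic data, not the study's 21-year record; the shape, scale and air-density values are invented for the example.

```python
import numpy as np
from math import gamma

def weibull_params(v):
    """Estimate Weibull shape k and scale c (m/s) from a wind speed
    record using the empirical-moment (Justus) approximation."""
    mean, std = np.mean(v), np.std(v)
    k = (std / mean) ** -1.086          # Justus shape estimate
    c = mean / gamma(1 + 1 / k)         # scale from the mean
    return k, c

def mean_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) for a Weibull wind regime
    at air density rho (kg/m^3)."""
    return 0.5 * rho * c ** 3 * gamma(1 + 3 / k)

# Illustrative synthetic record of one year of hourly speeds.
rng = np.random.default_rng(1)
speeds = rng.weibull(2.0, 8760) * 7.0   # true k = 2, c = 7 m/s
k, c = weibull_params(speeds)
pd = mean_power_density(k, c)
```

A real assessment would first extrapolate the 10 m speeds to hub height before fitting.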
Procedia PDF Downloads 129
162 Facilitating the Learning Environment as a Servant Leader: Empowering Self-Directed Student Learning
Authors: Thomas James Bell III
Abstract:
Pedagogy refers to one's philosophy, theory, or method of teaching. This study examines the science of learning, considering the forced reconsideration of effective pedagogy brought on by the aftermath of the 2020 coronavirus pandemic. With the aid of various technologies, online education holds both challenges and promise for enhancing the learning environment, if implemented to facilitate student learning. Behaviorism centers on the belief that the instructor is the sage on the classroom stage, using repetition techniques as the primary learning instrument. This approach to pedagogy ascribes complete control of the learning environment to the instructor and works best when students learn by answering questions with immediate feedback. Such structured learning reinforcement tends to guide students' learning without considering learners' independence and individual reasoning, and such activities may inadvertently stifle students' ability to develop critical thinking and self-expression skills. Fundamentally, liberationist pedagogy dismisses the notion that education is merely about students learning things; it is more about the way students learn. The liberationist approach democratizes the classroom by redefining the roles of teacher and student: the teacher is no longer viewed as the sage on the stage but as a guide on the side, and students are viewed as creators of knowledge rather than empty vessels to be filled with knowledge. Moreover, students are well placed to decide how best to learn and which areas need improvement. This study will explore the classroom instructor as a servant leader in the twenty-first century, which allows students to integrate technology that accommodates more individual learning styles. The researcher will examine the Professional Scrum Master (PSM I) exam pass-rate results of 124 students in six sections of an Agile scrum course.
The students will be separated into two groups: the first group will follow a structured, instructor-led course outlined by a course syllabus; the second group will consist of several small teams (ten or fewer) of self-led and self-empowered students. The teams will conduct several event meetings, including sprint planning meetings, daily scrums, sprint reviews, and retrospective meetings, throughout the semester, with the instructor facilitating the teams' activities as needed. The methodology for this study will use the compare-means t-test to compare the mean exam pass rate of one group to that of the second group. A one-tailed test (i.e., less than or greater than) will be used, with the null hypothesis that the difference between the groups in the population is zero. The major findings will expand on the pedagogical approach that suggests pedagogy primarily exists in support of teacher-led learning, which has formed the pillars of traditional classroom teaching. In light of the fourth industrial revolution, however, there is a fusion of learning platforms across the digital, physical, and biological worlds, with disruptive technological advancements in areas such as the Internet of Things (IoT), artificial intelligence (AI), 3D printing, robotics, and others.
Keywords: pedagogy, behaviorism, liberationism, flipping the classroom, servant leader instructor, agile scrum in education
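The compare-means test described in this abstract can be sketched as a pooled two-sample t statistic. This is a generic illustration, not the study's analysis; the group sizes and pass rates below are hypothetical.

```python
import numpy as np

def pooled_t(x, y):
    """Two-sample t statistic with pooled variance, as used in a
    compare-means test of exam pass rates between two groups."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1 / nx + 1 / ny))

# Hypothetical pass/fail outcomes (1 = pass) for the two groups;
# the 62/62 split and the pass probabilities are invented.
rng = np.random.default_rng(2)
instructor_led = rng.binomial(1, 0.70, 62)
self_led = rng.binomial(1, 0.85, 62)
t = pooled_t(self_led, instructor_led)
```

For the one-tailed decision, t would then be compared against the critical value of the t distribution with nx + ny - 2 degrees of freedom.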
Procedia PDF Downloads 142
161 Mixed Monolayer and PEG Linker Approaches to Creating Multifunctional Gold Nanoparticles
Authors: D. Dixon, J. Nicol, J. A. Coulter, E. Harrison
Abstract:
The ease with which they can be functionalized, combined with their excellent biocompatibility, makes gold nanoparticles (AuNPs) ideal candidates for various applications in nanomedicine; indeed, several promising treatments are currently undergoing human clinical trials (CYT-6091 and Auroshell). A successful nanoparticle treatment must first evade the immune system, then accumulate within the target tissue, before entering the diseased cells and delivering the payload. In order to create a clinically relevant drug delivery system, contrast agent or radiosensitizer, it is generally necessary to functionalize the AuNP surface with multiple groups, e.g. polyethylene glycol (PEG) for enhanced stability, targeting groups such as antibodies, peptides for enhanced internalization, and therapeutic agents. Creating such complex systems, and characterizing their biological response, remains a challenge. The two commonly used methods of attaching multiple groups to the surface of AuNPs are the creation of a mixed monolayer and binding groups to the AuNP surface using a bi-functional PEG linker. While some excellent in-vitro and animal results have been reported for both approaches, further work is necessary to compare the two methods directly. In this study, AuNPs capped with both PEG and a Receptor Mediated Endocytosis (RME) peptide were prepared using both the mixed monolayer and PEG linker approaches. The PEG linker used was SH-PEG-SGA, which has a thiol at one end for AuNP attachment and an NHS ester at the other to bind to the peptide. The work builds upon previous studies carried out at the University of Ulster, which investigated AuNP synthesis, the influence of PEG on stability in a range of media, and intracellular payload release. 18-19 nm citrate-capped AuNPs were prepared using the Turkevich method via the sodium citrate reduction of boiling 0.01 wt% chloroauric acid.
To produce PEG-capped AuNPs, the required amount of PEG-SH (5000 Mw) or SH-PEG-SGA (3000 Mw, Jenkem Technologies) was added, and the solution stirred overnight at room temperature. The RME (sequence: CKKKKKKSEDEYPYVPN, Biomatik) co-functionalised samples were prepared by adding the required amount of peptide to the PEG-capped samples and stirring overnight. The appropriate amounts of PEG-SH and RME peptide were added to the AuNPs to produce a mixed monolayer consisting of approximately 50% PEG and 50% RME. The PEG linker samples were first fully capped with bi-functional PEG before being capped with RME peptide. An increase in diameter from 18-19 nm for the ‘as synthesized’ AuNPs to 40-42 nm after PEG capping was observed via DLS. The presence of PEG and RME peptide on both the mixed monolayer and PEG linker co-functionalized samples was confirmed by both FTIR and TGA. Bi-functional PEG linkers allow the entire AuNP surface to be capped with PEG, enabling in-vitro stability to be achieved using a lower molecular weight PEG. The approach also allows the entire outer surface to be coated with peptide or other biologically active groups, whilst offering the promise of enhanced biological availability. The effect of mixed monolayer versus PEG linker attachment on both stability and non-specific protein corona interactions was also studied.
Keywords: nanomedicine, gold nanoparticles, PEG, biocompatibility
Procedia PDF Downloads 340
160 A Vision Making Exercise for Twente Region; Development and Assessment
Authors: Gelareh Ghaderi
Abstract:
The overall objective of this study is to develop two alternative plans of spatial and infrastructural development for the Netwerkstad Twente (Twente region) until 2040 and to assess the impacts of those two alternative plans. The region is located on the eastern border of the Netherlands and comprises five municipalities. Based on the strengths and opportunities of the five municipalities of the Netwerkstad Twente, and in order to develop the region internationally, strengthen the job market and retain a skilled and knowledgeable young population, two alternative visions have been developed: an environment-oriented vision and a market-oriented vision. The environment-oriented vision is based mostly on preserving beautiful landscapes; Twente would be recognized as an educational center, driven by green technologies and an environment-friendly economy. The market-oriented vision is based on attracting and developing different economic activities in the region, building on the visions of the five cities of the Netwerkstad Twente, in order to improve the competitiveness of the region on a national and international scale. On the basis of the two developed visions, and the strategies for achieving them, land use and infrastructural development are modeled and assessed. Based on a SWOT analysis, criteria were formulated and employed in modeling the two contrasting land use visions for the year 2040. Land use modeling consists of determining future land use demand, assessing the suitability of land (suitability analysis), and allocating land uses to suitable land. Suitability analysis aims to determine the available supply of land for future development and to assess its suitability for specific types of land use on the basis of the formulated set of criteria. The suitability analysis was carried out using CommunityViz, a Planning Support System application for spatially explicit land suitability and allocation.
Netwerkstad Twente has a highly developed transportation infrastructure, consisting of a highway network, national roads, regional roads, streets, local roads, a railway network and a bike-path network. Based on assumed speed limits for the different road types, the accessibility of the predicted land use parcels by four different transport modes is investigated. For the evaluation of the two development scenarios, the Multi-Criteria Evaluation (MCE) method is used. The first step was to determine the criteria used for evaluating each vision; all factors were categorized as economic, ecological or social. The results of the multi-criteria evaluation show that the environment-oriented scenario has the higher overall score, with impressive scores on the economic and ecological factors. This is due to the fact that a large percentage of housing tends towards compact housing. The Twente region has immense potential, and the success of this project would define the eastern part of the Netherlands and create a genuinely competitive local economy with innovation and an attractive environment as its backbone.
Keywords: economical oriented vision, environmental oriented vision, infrastructure, land use, multi criteria assessment, vision
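The weighted-scoring core of a multi-criteria evaluation of this kind can be sketched as below. This is a generic illustration, not the study's CommunityViz workflow; the criterion scores and equal weights are invented, and all scores are assumed to be on a common scale.

```python
import numpy as np

def mce_score(scores, weights):
    """Weighted-sum multi-criteria evaluation: criterion scores
    (rows = scenarios, columns = criteria, assumed comparable in
    scale) are combined using normalised weights."""
    s = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    return s @ (w / w.sum())

# Hypothetical criterion scores for the two scenarios
# (columns: economic, ecological, social) on an invented 0-10 scale.
scores = [[6.0, 8.5, 7.0],   # environment-oriented scenario
          [8.0, 5.0, 6.5]]   # market-oriented scenario
weights = [1.0, 1.0, 1.0]    # equal weights for illustration
overall = mce_score(scores, weights)
```

In practice the criterion columns would first be normalised to a common range, and the weights elicited from stakeholders rather than set equal.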
Procedia PDF Downloads 227
159 Effects of AI-driven Applications on Bank Performance in West Africa
Authors: Ani Wilson Uchenna, Ogbonna Chikodi
Abstract:
This study examined the impact of artificial-intelligence-driven applications on banks' performance in West Africa, using Nigeria and Ghana as case studies. Specifically, the study examined the extent to which the deployment of smart automated teller machines impacts banks' net worth within the reference period in Nigeria and Ghana. It ascertained the impact of point-of-sale services on banks' net worth within the reference period in the two countries. Thirdly, it verified the extent to which web pay services can influence banks' performance, and finally, it determined the impact of mobile pay services on banks' performance in Nigeria and Ghana. The study used automated teller machines (ATM), point-of-sale services (POS), mobile pay services (MOP) and web pay services (WBP) as proxies for the explanatory variables, while bank net worth was the explained variable. The data for this study were sourced from the Central Bank of Nigeria (CBN) Statistical Bulletin, the Bank of Ghana (BoGH) Statistical Bulletin, the Ghana payment systems oversight annual report and the World Development Indicators (WDI). Furthermore, the mixed order of integration observed in the panel unit root test results justified the use of the autoregressive distributed lag (ARDL) approach to data analysis, which the study adopted. While the cointegration test showed the existence of cointegration among the studied variables, the bounds test result confirmed the presence of a long-run relationship among the series. Again, the ARDL error correction estimate established a satisfactory (13.92%) speed of adjustment from long-run disequilibrium back to the short-run dynamic relationship.
The study found that automated teller machines (ATM) had a statistically significant impact on bank net worth (BNW) in Nigeria and Ghana; point-of-sale services (POS) also had a statistically significant impact within the study period; mobile pay services significantly impacted changes in bank net worth; while web pay services (WBP) had no statistically significant impact on the bank net worth of the countries of reference. The study concluded that artificial-intelligence-driven applications have a significant and positive impact on bank performance, with the exception of web pay, which had a negative impact on bank net worth. The study recommended that bank management in both Nigeria and Ghana encourage more investment in AI-powered smart ATMs aimed at delivering more secure banking services, in order to increase revenue, discourage excessive queuing in the banking hall, reduce fraud and minimize errors in processing transactions. Banks within the scope of this study should leverage modern technologies to checkmate the excesses of private POS operators in order to build more confidence among potential customers. Government should turn mobile pay services into a counter-terrorism tool by maintaining restrictions limiting over-the-counter withdrawals to a minimum amount and sanctioning withdrawals above that limit.
Keywords: artificial intelligence (ai), bank performance, automated teller machines (atm), point of sale (pos)
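The error-correction idea behind the reported speed of adjustment can be sketched with a simple two-step (Engle-Granger style) estimate. The study itself used the ARDL bounds framework on CBN and Bank of Ghana data, so the series below, the 2.0 long-run coefficient and the 14% adjustment rate are synthetic stand-ins for illustration only.

```python
import numpy as np

def ecm_speed_of_adjustment(y, x):
    """Two-step error-correction estimate: (1) long-run OLS of y on x;
    (2) regress dy on the lagged residual and dx. The coefficient on
    the lagged residual is the speed of adjustment, expected in (-1, 0)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    ect = y - X @ beta                       # error-correction term
    dy, dx = np.diff(y), np.diff(x)
    Z = np.column_stack([np.ones_like(dx), ect[:-1], dx])
    coef = np.linalg.lstsq(Z, dy, rcond=None)[0]
    return coef[1]

# Synthetic cointegrated pair standing in for bank net worth (y) and
# an AI-channel usage series (x); y corrects 14% of the gap per period.
rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(400))      # random-walk driver
y = np.empty_like(x)
y[0] = 0.0
for t in range(1, len(x)):
    y[t] = y[t - 1] - 0.14 * (y[t - 1] - 2.0 * x[t - 1]) + rng.standard_normal()

lam = ecm_speed_of_adjustment(y, x)
```

A negative lam close to the true -0.14 indicates the series pull back toward their long-run path, which is the quantity the abstract reports as 13.92%.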
Procedia PDF Downloads 9
158 Field-Testing a Digital Music Notebook
Authors: Rena Upitis, Philip C. Abrami, Karen Boese
Abstract:
The success of one-on-one music study relies heavily on the ability of the teacher to provide sufficient direction to students during weekly lessons so that they can successfully practice from one lesson to the next. Traditionally, these instructions are given in a paper notebook, where the teacher makes notes for the students after describing a task or demonstrating a technique. The ability of students to make sense of these notes varies according to their understanding of the teacher’s directions, their motivation to practice, their memory of the lesson, and their abilities to self-regulate. At best, the notes enable the student to progress successfully. At worst, the student is left rudderless until the next lesson takes place. Digital notebooks have the potential to provide a more interactive and effective bridge between music lessons than traditional pen-and-paper notebooks. One such digital notebook, Cadenza, was designed to streamline and improve teachers’ instruction, to enhance student practicing, and to provide the means for teachers and students to communicate between lessons. For example, Cadenza contains a video annotator, where teachers can offer real-time guidance on uploaded student performances. Using the checklist feature, teachers and students negotiate the frequency and type of practice during the lesson, which the student can then access during subsequent practice sessions. Following the tenets of self-regulated learning, goal setting and reflection are also featured. Accordingly, the present paper addressed the following research questions: (1) How does the use of the Cadenza digital music notebook engage students and their teachers?, (2) Which features of Cadenza are most successful?, (3) Which features could be improved?, and (4) Is student learning and motivation enhanced with the use of the Cadenza digital music notebook? 
The paper describes the results of 10 months of field-testing of Cadenza, structured around the four research questions outlined. Six teachers and 65 students took part in the study. Data were collected through video-recorded lesson observations, digital screen captures, surveys, and interviews. Standard qualitative protocols for coding results and identifying themes were employed to analyze the results. The results consistently indicated that teachers and students embraced the digital platform offered by Cadenza. The practice log and timer, the real-time annotation tool, the checklists, the lesson summaries, and the commenting features were found to be the most valuable functions by students and teachers alike. Teachers also reported that students progressed more quickly with Cadenza, and received higher results in examinations, than students who were not using Cadenza. Teachers identified modifications to Cadenza that would make it an even more powerful way to support student learning. These modifications, once implemented, will move the tool well past its traditional notebook uses to new ways of motivating students to practise between lessons and to communicate with teachers about their learning. Improvements called for by the teachers included the ability to duplicate archived lessons, split-screen viewing, and goal setting in the teacher window. In the concluding section, the proposed modifications and their implications for self-regulated learning are discussed.
Keywords: digital music technologies, electronic notebooks, self-regulated learning, studio music instruction
Procedia PDF Downloads 254
157 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Hans Binder
Abstract:
Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as the genome, transcriptome and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite for discovering the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and for understanding cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the large size of the data, their complexity, and the need to search for hidden structures in the data, for knowledge mining to discover biological function, and for systems-biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and to clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option for tackling these bioinformatics tasks. The SOM method enables the recognition of complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options.
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases such as gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
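As background for the algorithm named in this abstract, a minimal self-organizing map training loop might look as follows. This is an illustrative sketch only, not the authors' implementation: the grid size, decay schedules and the toy input matrix are all assumptions.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: each grid node holds a weight vector that is pulled
    toward the data samples for which it (or a neighbor) is the best match."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # (row, col) coordinate of every node, used for grid-neighborhood distances
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)               # linearly decaying learning rate
            sigma = sigma0 * (1 - frac) + 1e-3  # shrinking neighborhood radius
            # best-matching unit: the node whose weights are closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), (h, w))
            # Gaussian neighborhood around the BMU on the grid
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

# toy stand-in for an omics matrix: 200 samples x 5 features
data = np.random.default_rng(1).random((200, 5))
som = train_som(data)
print(som.shape)  # (10, 10, 5)
```

Each of the 10x10 nodes ends up as a prototype vector; mapping every sample to its best-matching unit yields the kind of two-dimensional "portrait" of the data landscape the abstract refers to.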
Procedia PDF Downloads 148
156 Alternate Optical Coherence Tomography Technologies in Use for Corneal Diseases Diagnosis in Dogs and Cats
Authors: U. E. Mochalova, A. V. Demeneva, A. G. Shilkin, J. Yu. Artiushina
Abstract:
Objective. In medical ophthalmology, OCT has been actively used in the last decade. It is a modern, non-invasive method of high-precision hardware examination that gives a detailed cross-sectional image of eye tissue structure at a high level of resolution and provides in vivo morphological information at the microscopic level about the corneal tissue, the structures of the anterior segment, the retina and the optic nerve. The purpose of this study was to explore the possibility of using OCT technology in the complex ophthalmological examination of dogs and cats, and to characterize the revealed pathological structural changes in corneal tissue in cats and dogs with some of the most common corneal diseases. Procedures. Optical coherence tomography of the cornea was performed in 112 animals: 68 dogs and 44 cats. In total, 224 eyes were examined. Pathologies of the organ of vision included: dystrophy and degeneration of the cornea, endothelial corneal dystrophy, dry eye syndrome, chronic superficial vascular keratitis, pigmented keratitis, corneal erosion, ulcerative stromal keratitis, corneal sequestration, chronic glaucoma, and the postoperative period after keratoplasty. When performing OCT, we used certified medical devices: "Huvitz HOCT-1/1F", "Optovue iVue 80" and "SOCT Copernicus Revo (60)". Results. The article presents the results of a clinical study on the use of optical coherence tomography (OCT) of the cornea in cats and dogs, performed by the authors, in the complex diagnosis of keratopathies of various origins: endothelial corneal dystrophy, pigmented keratitis, chronic keratoconjunctivitis, chronic herpetic keratitis, ulcerative keratitis, traumatic corneal damage, corneal sequestration in cats, and chronic keratitis complicating the course of glaucoma. The characteristics of OCT scans of the corneas of cats and dogs without corneal pathologies are also given.
OCT scans of various corneal pathologies in dogs and cats are presented, with a description of the revealed pathological changes. Of great clinical interest are the data obtained during OCT of the corneas of animals that underwent keratoplasty operations using various forms of grafts. Conclusions. OCT makes it possible to assess the thickness of, and pathological structural changes in, the corneal surface epithelium, the corneal stroma and Descemet's membrane. We can measure these changes, determine their exact localization, and record their progression. Clinical observation of the dynamics of the pathological process in the cornea using OCT makes it possible to evaluate the effectiveness of drug treatment. In the case of negative dynamics of a corneal disease, it is necessary to determine the indications for surgical treatment (to assess the thickness of the cornea, the localization of its thinning zones, and the depth and area of pathological changes). Based on corneal OCT, it is possible to choose the optimal surgical treatment for the patient and the technique and depth of optically reconstructive surgery (penetrating or anterior lamellar keratoplasty), and to determine the depth and diameter of the planned microsurgical trepanation of corneal tissue, which will ensure good adaptation of the edges of the donor material.
Keywords: optical coherence tomography, corneal sequestration, optical coherence tomography of the cornea, corneal transplantation, cat, dog
Procedia PDF Downloads 70