Search results for: fast vs slow BTI
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2423

653 From News Breakers to News Followers: The Influence of Facebook on the Coverage of the January 2010 Crisis in Jos

Authors: T. Obateru, Samuel Olaniran

Abstract:

In an era when new media afford easy access to the packaging and dissemination of information, social media have become a popular avenue for sharing information, for good or ill. The traditional role of journalists as 'news breakers' is evidently fast being eroded. People now share information on happenings via social media like Facebook and Twitter, such that journalists themselves now get leads on events from such sources. Beyond the access to information provided by the new media lies the erosion of the gatekeeping role of journalists, who, by their training and calling, are supposed to handle information with responsibility. Thus, sensitive information that journalists would normally filter is randomly shared by social media activists. This was the experience of journalists in Jos, Plateau State, in January 2010, when another of the recurring ethnoreligious crises that have engulfed the state resulted in widespread killing, vandalism, looting, and displacement. The crisis is considered one of the high points of unrest in the state, and the journalists who had the duty of covering it also relied on some of these sources to get their bearings on the violence. This paper examined the role of Facebook in the work of journalists who covered the 2010 crisis. Taking the gatekeeping perspective, it interrogated the extent to which Facebook impacted their professional duty, positively or negatively, vis-à-vis the peace journalism model. It employed a questionnaire-based survey to elicit information from 50 journalists who covered the crisis. The paper revealed that the dissemination of hate information via mobile phones and social media, especially Facebook, aggravated the crisis. Journalists became news followers rather than news breakers because many of them were put on their toes by information (much of it inaccurate or false) circulated on Facebook. It recommended that journalists remain true to their calling by upholding their gatekeeping role of disseminating only accurate and responsible information if they are to remain the main source of credible information on which their audiences rely.

Keywords: crisis, ethnoreligious, Facebook, journalists

Procedia PDF Downloads 274
652 Vulnerability Assessment of Vertically Irregular Structures during Earthquake

Authors: Pranab Kumar Das

Abstract:

A vulnerability assessment of buildings with irregularity in the vertical direction has been carried out in this study. The construction of vertically irregular buildings is increasing in the context of fast urbanization in developing countries, including India. During two reconnaissance-based surveys performed after the 2015 Nepal earthquake and the 2016 Imphal (India) earthquake, it was observed that many structures were damaged because of their vertically irregular configuration. These irregular buildings must nevertheless perform safely during seismic excitation. There is therefore an urgent need to establish the actual vulnerability of such structures, so that remedial measures can be taken to protect them during natural hazards such as earthquakes. This assessment will be very helpful for India as well as for other developing countries. A substantial body of research addresses the vulnerability of plan-asymmetric buildings, but much less effort has been directed at the vulnerability of vertically irregular buildings during earthquakes. Irregularity in the vertical direction may be caused by an irregular distribution of mass or stiffness, or by a geometrically irregular configuration. Detailed analysis of such structures, particularly non-linear/pushover analysis for performance-based design, is challenging. The present paper considers a number of models of irregular structures. Building models made of both reinforced concrete and brick masonry are considered for the sake of generality. The analyses are performed with the help of the finite element method and computational methods. The study, as a whole, may help to arrive at a reasonably good estimate of, and insight into, the fundamental and other natural periods of such vertically irregular structures. The ductility demand, storey drift, and seismic response studies help to identify the locations of critical stress concentration. In summary, this paper is a modest step towards understanding the vulnerability of, and framing guidelines for, vertically irregular structures.
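The storey drift mentioned above is usually reported as the inter-storey drift ratio, i.e., the relative lateral displacement of consecutive floors divided by the storey height. The short sketch below computes it for a hypothetical displacement profile; all values are assumed for illustration and are not from the paper.

```python
# Inter-storey drift ratio from a lateral displacement profile.
# All displacement and height values are hypothetical.
floor_disp_mm = [0.0, 8.2, 15.1, 19.7, 22.4]  # lateral displacement per floor
storey_height_mm = 3200.0                     # assumed uniform storey height

for i in range(1, len(floor_disp_mm)):
    drift = (floor_disp_mm[i] - floor_disp_mm[i - 1]) / storey_height_mm
    print(f"storey {i}: drift ratio = {drift:.4%}")
```

Peaks in this ratio along the height are one indicator of where critical stress concentration and ductility demand localise in an irregular frame.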

Keywords: ductility, stress concentration, vertically irregular structure, vulnerability

Procedia PDF Downloads 216
651 A Perspective on Education to Support Industry 4.0: An Exploratory Study in the UK

Authors: Sin Ying Tan, Mohammed Alloghani, A. J. Aljaaf, Abir Hussain, Jamila Mustafina

Abstract:

Industry 4.0 is a term frequently used to describe the upcoming industrial era. Higher education institutions aim to prepare students to fulfil future industry needs. The advancement of digital technology has paved the way for the evolution of education and technology. The evolution of education, however, has proven conservative, with a high level of resistance to change and transformation. The gap between the industry's needs and the competencies generally offered by education reveals an increasing need to find new educational models for the future. The aim of this study was to identify the main issues faced by both universities and students in preparing the future workforce. From December 2018 to April 2019, a regional qualitative study was undertaken in Liverpool, United Kingdom (UK). Interviews were conducted with employers, faculty members and undergraduate students, and the results were analyzed using the open coding method. Four main issues were identified: the characteristics of the future workforce, students' readiness to work, expectations of the different roles played at the tertiary education level, and awareness of the latest trends. The findings of this paper conclude that employers and academic practitioners agree that their expectations of each other's roles differ and that, to face the rapidly changing technology era, students should not only have the right skills but also the right attitude to learning. Therefore, the authors address this issue by proposing a learning framework, the 'ASK SUMA' framework, as a guideline to support students, academics and employers in meeting the needs of Industry 4.0. Furthermore, this technology era requires employers, academic practitioners and students to work together to face the upcoming challenges and fast-changing technologies. It is also suggested that an interactive system be provided as a platform to support the three parties in playing their roles.

Keywords: attitude, expectations, industry needs, knowledge, skills

Procedia PDF Downloads 107
650 Studying the Evolution of Soot and Precursors in Turbulent Flames Using Laser Diagnostics

Authors: Muhammad A. Ashraf, Scott Steinmetz, Matthew J. Dunn, Assaad R. Masri

Abstract:

This study focuses on the evolution of soot and soot precursors in three different piloted turbulent diffusion flames. The fuel compositions are as follows: flame A (ethylene/nitrogen, 2:3 by volume), flame B (ethylene/air, 2:3 by volume), and flame C (pure methane). These flames are stabilized using a 4 mm diameter jet surrounded by a pilot annulus with an outer diameter of 15 mm. The pilot issues combustion products from stoichiometric premixed flames of hydrogen, acetylene, and air. In all cases, the jet Reynolds number is 10,000, and air flows in the coflow stream at a velocity of 5 m/s. Time-resolved laser-induced fluorescence (LIF) is collected in two wavelength bands, in the visible (445 nm) and UV (266 nm) regions, along with laser-induced incandescence (LII). The combined results are employed to study the concentration, size, and growth of soot and its precursors. A set of four fast photomultiplier tubes is used to record the emission data in the temporal domain. A 266 nm laser pulse preferentially excites smaller nanoparticles, which emit a fluorescence spectrum that is analysed to track the presence, evolution, and destruction of nanoparticles. A 1064 nm laser pulse excites sufficiently large soot particles, and the resulting incandescence is collected at 1064 nm. At downstream and outer radial locations, intermittency becomes a relevant factor. Therefore, the data collected in the turbulent flames are conditioned to account for intermittency, so that the resulting mean profiles of scattering, fluorescence, and incandescence are computed only for the events that contain traces of soot. It is found that in the upstream regions of the ethylene-air and ethylene-nitrogen flames, the presence of soot precursors is rather similar. However, further downstream, the soot concentration grows larger in the ethylene-air flame.
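The intermittency conditioning described above can be pictured as a simple thresholding step: only laser shots whose signal exceeds a soot-detection threshold enter the conditional average. The sketch below illustrates this with synthetic single-shot LII values; the signal model and threshold are assumptions, not the study's data.

```python
import numpy as np

# Illustrative intermittency conditioning: average LII signals only over
# shots that contain traces of soot (signal above an assumed threshold).
rng = np.random.default_rng(0)
lii = rng.exponential(scale=0.2, size=1000)  # stand-in single-shot LII signals
threshold = 0.5                              # assumed soot-detection threshold

sooting = lii[lii > threshold]               # conditioned subset
intermittency = sooting.size / lii.size      # fraction of sooting shots
print(f"intermittency = {intermittency:.2f}, "
      f"conditioned mean = {sooting.mean():.3f}, "
      f"unconditioned mean = {lii.mean():.3f}")
```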

Keywords: laser induced incandescence, laser induced fluorescence, soot, nanoparticles

Procedia PDF Downloads 127
649 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory's efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can then be difficult to pinpoint the cause, and even more difficult to predict issues before they arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If scenarios could be modelled using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system is used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This allows pathologists to see clearly where there are issues and bottlenecks in the process. Graphs also indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples displays more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly and, in the case of potentially late samples, could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time, with the live information extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
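The traffic-light logic described above is simple threshold classification on each stage's throughput. A minimal sketch follows; the paper implements this in JavaScript over a live Oracle feed, so the Python below, with its normalised flow rates and thresholds, is purely an illustrative assumption.

```python
# Hedged sketch of the traffic-light colour coding for stage throughput.
# Thresholds and stage flow rates are hypothetical; the described system
# implements this logic in JavaScript against a live Oracle database.
def stage_status(flow_rate: float, slow: float = 0.7, critical: float = 0.4) -> str:
    """Map a normalised flow rate (actual/expected) to a status colour."""
    if flow_rate >= slow:
        return "green"   # normal flow
    if flow_rate >= critical:
        return "orange"  # slow flow
    return "red"         # critical flow

for stage, rate in {"reception": 0.95, "testing": 0.55, "validation": 0.30}.items():
    print(stage, stage_status(rate))
```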

Keywords: laboratory process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 268
648 Comparison between High Resolution Ultrasonography and Magnetic Resonance Imaging in Assessment of Musculoskeletal Disorders Causing Ankle Pain

Authors: Engy S. El-Kayal, Mohamed M. S. Arafa

Abstract:

There are various causes of ankle pain (AP), both traumatic and non-traumatic, and various imaging techniques are available for its assessment. MRI is considered the imaging modality of choice for ankle joint evaluation, with the advantages of high spatial resolution and multiplanar capability, and hence the ability to visualize the small, complex anatomical structures around the ankle. However, the high cost, the relatively limited availability of MRI systems, and the relatively long duration of the examination are all disadvantages of MRI. There is therefore a need for a more rapid and less expensive examination modality with good diagnostic accuracy to fill this gap. High-resolution ultrasonography (HRU) has become increasingly important in the assessment of ankle disorders, with the advantages of being fast, reliable, low-cost and readily available. Ultrasound (US) can visualize detailed anatomical structures and assess tendinous and ligamentous integrity. The aim of this study was to compare the diagnostic accuracy of HRU with that of MRI in the assessment of patients with AP. We included forty patients complaining of AP. All patients underwent real-time HRU and MRI of the affected ankle, and the results of both techniques were compared to the surgical and arthroscopic findings. All patients were examined according to a defined protocol covering tendon tears or tendinitis; muscle tears, masses, or fluid collections; ligament sprains or tears; inflammation or fluid effusion within the joint or bursa; bone and cartilage lesions; and erosions and osteophytes. Analysis of the results showed that the mean age of the patients was 38 years. The study comprised 24 women (60%) and 16 men (40%). The accuracy of HRU in detecting the causes of AP was 85%, while the accuracy of MRI was 87.5%. In conclusion, HRU and MRI are two complementary investigative tools: the former can be used as the primary tool, and the latter to confirm the diagnosis and the extent of the lesion, especially when surgical intervention is planned.
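The accuracy figures above follow from comparing each modality's findings against the surgical/arthroscopic reference standard. The sketch below shows that computation for a 40-patient cohort; the confusion-matrix counts are invented for illustration and are not the study's data.

```python
# Diagnostic accuracy against a surgical/arthroscopic reference standard.
# The counts below are hypothetical for a 40-patient cohort.
def diagnostic_accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of patients whose imaging agreed with the reference."""
    return (tp + tn) / (tp + tn + fp + fn)

print(f"accuracy = {diagnostic_accuracy(tp=28, tn=6, fp=3, fn=3):.1%}")  # 85.0%
```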

Keywords: ankle pain (AP), high-resolution ultrasound (HRU), magnetic resonance imaging (MRI), ultrasonography (US)

Procedia PDF Downloads 174
647 A Review of Emerging Technologies in Antennas and Phased Arrays for Avionics Systems

Authors: Muhammad Safi, Abdul Manan

Abstract:

In recent years, research in aircraft avionics systems (i.e., radars and antennas) has advanced at a revolutionary pace. Aircraft technology is experiencing an increasing shift from all-mechanical to all-electric aircraft, with the introduction of uninhabited air vehicles and drone taxis over the last few years. This creates an overriding need to summarize the history, latest trends, and future developments in aircraft avionics research for a better understanding and development of new technologies in the domain of avionics systems. This paper focuses on future trends in antennas and phased arrays for avionics systems. Along with a general overview of future avionics trends, this work reviews around 50 high-quality research papers on aircraft communication systems. Electric-powered aircraft have been a hot topic in the modern aircraft world, as electric aircraft offer advantages over their conventional counterparts. With the growth of drone taxis and urban air mobility, fast and reliable communication is very important, so concepts such as Broadband Integrated Digital Avionics Information Exchange Networks (B-IDAIENs) and modular avionics are being researched for better communication in future aircraft. A Ku-band phased array antenna based on a modular design can be used in a modular avionics system. Integrated avionics is another emerging area of future avionics research. The main focus of future avionics work will be on integrated modular avionics and infra-red phased array antennas, which are discussed in detail in this paper. Other work, such as reconfigurable antennas and optical communication, is also discussed. The future of modern aircraft avionics will likely be based on integrated modular avionics and small artificial-intelligence-based antennas. Optical and infrared communication will also replace microwave frequencies.
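For readers unfamiliar with phased arrays, the electronic beam steering that makes them attractive for avionics can be sketched with the standard uniform linear array factor. Everything in the snippet (element count, half-wavelength spacing, 14 GHz Ku-band frequency, steering angle) is an assumed textbook example, not a parameter from the reviewed work.

```python
import numpy as np

# Uniform linear array factor steered to theta0: a textbook sketch of the
# beam-steering principle behind phased array antennas.  All parameters
# (Ku-band frequency, 16 elements, spacing, steering angle) are assumed.
c = 3e8
f = 14e9                      # Ku-band frequency (assumed)
n = 16                        # number of elements (assumed)
d = 0.5 * c / f               # half-wavelength element spacing
k = 2 * np.pi * f / c
theta0 = np.deg2rad(20.0)     # electronic steering angle (assumed)

theta = np.deg2rad(np.linspace(-90, 90, 721))
phase = k * d * (np.sin(theta)[:, None] - np.sin(theta0)) * np.arange(n)
af = np.abs(np.exp(1j * phase).sum(axis=1)) / n
print(f"main beam at {np.rad2deg(theta[af.argmax()]):.1f} deg")  # ~20 deg
```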

Keywords: AI, avionics systems, communication, electric aircraft, infra-red, integrated avionics, modular avionics, phased array, reconfigurable antenna, UAVs

Procedia PDF Downloads 52
646 Business Feasibility of Online Marketing of Food and Beverage Products in India

Authors: Dimpy Shah

Abstract:

The global economy has changed substantially in the last three decades. Almost all markets are now transparent and visible to global customers, and corporates are no longer reliant on local markets for trade. The information technology revolution has changed the business dynamics and marketing practices of corporates. Markets are divided into two different formats: traditional and virtual. In a very short span of time, many e-commerce portals have captured the global market. This strategy is well supported by the global delivery systems of multinational logistics companies. Markets now deal with global supply chain networks, which are more demand-driven and customer-oriented. Corporates have realized the importance of supply chain integration and marketing in this competitive environment. Indian markets are also significantly affected by all these changes. In terms of population, India is in second place after China, and in terms of demography, almost half of the population is young. It has been observed that Indian youth are more inclined towards e-commerce and prefer to buy goods from web portals. Initially, this trend was observed in the Indian service sector, textiles and electronic goods, and it has now extended to other product categories. FMCG companies have also recognized this change and started integrating their supply chains with e-commerce platforms. This paper attempts to understand the contemporary marketing practices of corporates in the e-commerce business in the Indian food and beverages segment, and also tries to identify innovative marketing practices for the proper execution of their strategies. The findings are mainly focused on supply chain re-integration and brand-building strategies with proper utilization of social media.

Keywords: FMCG (Fast Moving Consumer Goods), ISCM (Integrated Supply Chain Management), RFID (Radio Frequency Identification), traditional and virtual formats

Procedia PDF Downloads 249
645 A Sustainable Approach for Waste Management: Automotive Waste Transformation into High Value Titanium Nitride Ceramic

Authors: Mohannad Mayyas, Farshid Pahlevani, Veena Sahajwalla

Abstract:

Automotive shredder residue (ASR) is an industrial waste generated during the recycling of end-of-life vehicles. The large and increasing production volumes of ASR and its hazardous content have raised concerns worldwide, leading some countries to impose more restrictions on ASR disposal and encouraging researchers to find efficient solutions for ASR processing. Although a great deal of research work has been carried out, all proposed solutions, to our knowledge, remain commercially and technically unproven. While the volume of waste materials continues to increase, the production of materials from new, sustainable sources has become of great importance. Advanced ceramic materials such as nitrides, carbides and borides are widely used in a variety of applications. Among these ceramics, a great deal of attention has recently been paid to titanium nitride (TiN) owing to its unique characteristics. In our study, we propose a new, sustainable approach to ASR management in which TiN nanoparticles with particle sizes ranging from 200 to 315 nm can be synthesized as a by-product. In this approach, TiN is thermally synthesized by nitriding a pressed mixture of ASR and titanium oxide (TiO2). The results indicated that TiO2 influences and catalyses the degradation reactions of ASR and helps to achieve fast and full decomposition. In addition, the process resulted in a titanium nitride (TiN) ceramic with several distinctive structures (porous nanostructured, polycrystalline, micro-spherical and nano-sized) that were obtained simply by tuning the ratio of TiO2 to ASR, and a product with an appreciable TiN content of around 85% was achieved after only one hour of nitridation at 1550 °C.
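The abstract does not write out the chemistry, but nitriding TiO2 with a carbon-rich residue under nitrogen is presumably an overall carbothermal reduction-nitridation, with ASR-derived carbon acting as the reductant. The balanced textbook form of that reaction is shown below as an illustrative assumption, not a stoichiometry reported by the authors.

2 TiO2 + 4 C + N2 → 2 TiN + 4 CO

In practice, the effective carbon content of ASR varies, which is consistent with the authors' observation that the TiO2-to-ASR ratio controls the resulting TiN structures.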

Keywords: automotive shredder residue, nano-ceramics, waste treatment, titanium nitride, thermal conversion

Procedia PDF Downloads 276
644 Efficient Compact Micro Dielectric Barrier Discharge (DBD) Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems

Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. Zimmerman

Abstract:

Ozone is well known as a powerful oxidant with a fast reaction rate. Ozone-based processes leave no by-products, as non-reacted ozone reverts to the original oxygen molecule. The application of ozone is therefore widely accepted as one of the main directions for the development of sustainable and clean technologies. A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints and the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of mini/micro-scale ozone generators that can be directly incorporated into production units. Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented. At the selected operating conditions, with a residence time of 0.25 s, ozone generation is not limited by reaction rate, and the amount of ozone produced is a function of the applied power. It was shown that the MROG is capable of producing ozone at voltages starting from 3.5 kV, with an ozone concentration of 5.28×10⁻⁶ mol/L at 5 kV. This is in line with data from a numerical investigation of the MROG. It was shown that, compared to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure. The MROG construction makes it applicable to submerged and dry systems. With its robust, compact design, the MROG can be incorporated as a unit in production lines of high complexity.
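To relate the reported molar concentration to the mass-based units more common in ozone engineering, only the molar mass of O3 (about 48 g/mol) is needed; a quick conversion sketch follows.

```python
# Convert the reported ozone concentration (5.28e-6 mol/L at 5 kV) to
# mass-based units.  Only the molar mass of O3 (~48 g/mol) is assumed.
c_mol_per_l = 5.28e-6
molar_mass_o3 = 48.0                          # g/mol
mg_per_l = c_mol_per_l * molar_mass_o3 * 1e3
print(f"{mg_per_l:.3f} mg/L = {mg_per_l * 1e3:.0f} mg/m^3")  # ~0.253 mg/L
```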

Keywords: dielectric barrier discharge (DBD), micro reactor, ozone, plasma

Procedia PDF Downloads 317
643 Development of an Electrochemical Aptasensor for the Detection of Human Osteopontin Protein

Authors: Sofia G. Meirinho, Luis G. Dias, António M. Peres, Lígia R. Rodrigues

Abstract:

The emerging development of electrochemical aptasensors has enabled the easy and fast detection of protein biomarkers in standard and real samples. Biomarkers are produced by body organs or tumours and provide a measure of antigens on cell surfaces. When detected in high amounts in blood, they can be suggestive of tumour activity. These biomarkers are most often used to evaluate treatment effects or to assess the potential for metastatic disease in patients with established disease. Osteopontin (OPN) is a protein found in all body fluids and constitutes a possible biomarker because its overexpression has been related to breast cancer progression and metastasis. Currently, biomarkers are commonly used for the development of diagnostic methods, allowing the detection of the disease in its initial stages. A previously described RNA aptamer was used in the current work to develop a simple and sensitive electrochemical aptasensor with high affinity for human OPN. The RNA aptamer was biotinylated and immobilized on a gold electrode by avidin-biotin interaction. The electrochemical signal generated from the aptamer-target interaction was monitored by cyclic voltammetry in the presence of [Fe(CN)6]3-/4- as a redox probe. The observed signal showed a current decrease due to the binding of OPN. Preliminary results showed that this aptasensor enables the detection of OPN in standard solutions, showing good selectivity towards the target in the presence of other interfering proteins such as bovine OPN and bovine serum albumin. The results gathered in the current work suggest that the proposed electrochemical aptasensor is a simple and sensitive detection tool for human OPN and may therefore have future applications in cancer monitoring.
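The readout described above is the drop in the redox probe's voltammetric peak current once bound OPN hinders electron transfer at the electrode. A minimal sketch of how that suppression would be quantified is given below; the peak-current values are hypothetical, not measurements from the paper.

```python
# Relative suppression of the redox-probe peak current after OPN binding,
# the quantity monitored by the aptasensor.  Currents are hypothetical.
i_blank = 12.4e-6  # peak current before OPN exposure, in amperes (assumed)
i_bound = 9.1e-6   # peak current after OPN binding, in amperes (assumed)

suppression = (i_blank - i_bound) / i_blank
print(f"signal suppression = {suppression:.1%}")
```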

Keywords: osteopontin, aptamer, aptasensor, screen-printed electrode, cyclic voltammetry

Procedia PDF Downloads 409
642 Advancing Food System Resilience by Pseudocereals Utilization

Authors: Yevheniia Varyvoda, Douglas Taren

Abstract:

At the aggregate level, climate variability, the rising number of active violent conflicts, the globalization and industrialization of agriculture, the loss of diversity of crop species, the increase in demand for agricultural production, and the adoption of healthy and sustainable dietary patterns are exacerbating factors in food system destabilization. The importance of pseudocereals in fuelling and sustaining resilient food systems is recognized by leading organizations working to end hunger, particularly for their critical capability to diversify livelihood portfolios and provide plant-sourced healthy nutrition in the face of systemic shocks and stresses. Amaranth, buckwheat, and quinoa are the most promising and widely used pseudocereals for ensuring food system resilience in the reality of climate change, owing to their high nutritional profile, good digestibility, palatability, medicinal value, abiotic stress tolerance, pest and disease resistance, rapid growth rate, adaptability to marginal and degraded lands, high genetic variability, low input requirements, and income-generation capacity. The study provides the rationale for, and examples of, advancing the resilience of local and regional food systems by scaling up the utilization of amaranth, buckwheat, and quinoa along all components of food systems to architect indirect nutrition interventions and climate-smart approaches. Thus, this study aims to explore the drivers of ancient pseudocereal utilization, the potential resilience benefits that can be derived from using them, and the challenges and opportunities for pseudocereal utilization within the food system components. The PSALSAR framework for systematic review and meta-analysis in environmental science research was used to answer these research questions. Nevertheless, the utilization of pseudocereals has been slow for a number of reasons, namely the increased production of commercial and major staples such as maize, rice, wheat, soybean, and potato; displacement due to pressure from imported crops; lack of knowledge about value-adding practices in the food supply chain; limited technical knowledge and awareness about nutritional and health benefits; the absence of marketing channels; and limited access to extension services and information about resilient crops. The success of climate-resilient pathways based on pseudocereal utilization underlines the importance of co-designed activities that use modern technologies, the high-value traditional knowledge of underutilized crops, and a strong acknowledgment of cultural norms to increase community-level economic and food system resilience.

Keywords: resilience, pseudocereals, food system, climate change

Procedia PDF Downloads 59
641 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to produce incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modelled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modelled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in the light of risk management.
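A hedged numerical counterpart to the analytical derivation above: the Dagum distribution is Burr Type III, available in scipy.stats as burr, and for a dry, cohesionless infinite slope the Mohr-Coulomb safety factor reduces to FS = tan(φ)/tan(β). The sketch below propagates both friction-angle models through that formula by Monte Carlo; every parameter value is assumed for illustration, and the paper itself works with the analytical PDF rather than simulation.

```python
import numpy as np
from scipy import stats

# Monte Carlo sketch: failure probability P(FS < 1) with the friction
# angle phi modelled as Normal vs. Dagum (Burr Type III).  FS uses the
# cohesionless infinite-slope form FS = tan(phi)/tan(beta).  All
# parameters are assumed; the paper derives the PDF analytically.
rng = np.random.default_rng(1)
beta = np.deg2rad(25.0)                              # slope angle (assumed)

models = {
    "Normal": stats.norm(loc=30.0, scale=3.0),       # assumed fit, degrees
    "Dagum":  stats.burr(c=8.0, d=2.0, scale=28.0),  # assumed Burr III fit
}
for name, dist in models.items():
    phi = np.deg2rad(dist.rvs(size=200_000, random_state=rng))
    fs = np.tan(phi) / np.tan(beta)
    print(f"{name:6s}: P(FS < 1) = {np.mean(fs < 1):.4f}")
```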

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 318
640 Luminescent Functionalized Graphene Oxide Based Sensitive Detection of Deadly Explosive TNP

Authors: Diptiman Dinda, Shyamal Kumar Saha

Abstract:

In the 21st century, the sensitive and selective detection of trace amounts of explosives has become a serious problem. Nitro compounds and their derivatives are used worldwide to prepare different explosives. Recently, TNP (2,4,6-trinitrophenol) has become the most commonly used constituent of powerful explosives all over the world; it is even more powerful than TNT or RDX. As explosives are electron deficient in nature, it is very difficult to detect one separately from a mixture. Moreover, due to its tremendous water solubility, the detection of TNP in water in the presence of other explosives is very challenging. Simple instrumentation, cost-effectiveness, speed, and high sensitivity make fluorescence-based optical sensing a great success compared to other techniques. Graphene oxide (GO), with its large number of epoxy groups, carries localized non-radiative electron-hole centres on its surface and therefore exhibits only very weak fluorescence. In this work, GO is functionalized with 2,6-diaminopyridine to remove those epoxy groups through an SN2 reaction. This turns GO into a bright blue luminescent fluorophore (DAP/rGO), which shows an intense PL spectrum at ∼384 nm when excited at a 309 nm wavelength. We have also characterized the material by FTIR, XPS, UV, XRD and Raman measurements. Using this as the fluorophore, a large fluorescence quenching (96%) is observed after the addition of only 200 µL of 1 mM TNP in aqueous solution. Other nitro explosives give very moderate PL quenching compared to TNP. Such high selectivity is related to the operation of a FRET mechanism from the fluorophore to TNP during the PL quenching experiment. TCSPC measurements also reveal that the lifetime of DAP/rGO drastically decreases from 3.7 to 1.9 ns after the addition of TNP. Our material is also sensitive to TNP down to the 125 ppb level. Finally, we believe that this graphene-based luminescent material will open up a new class of sensing materials for detecting trace amounts of explosives in aqueous solution.
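Quenching data of this kind are commonly summarized with a Stern-Volmer plot, F0/F = 1 + Ksv[Q]; the 96% quenching reported above corresponds to F/F0 of roughly 0.04 at that TNP dose. The sketch below fits a Stern-Volmer constant to hypothetical intensity data; note the paper attributes the quenching to FRET, for which the linear Stern-Volmer form is only a convenient empirical summary.

```python
import numpy as np

# Stern-Volmer analysis F0/F = 1 + Ksv*[Q], a common way to quantify
# fluorescence quenching.  Intensities and concentrations are hypothetical.
conc_um = np.array([5.0, 10.0, 20.0, 40.0])     # TNP concentration, uM
f_norm = np.array([0.62, 0.45, 0.29, 0.17])     # F/F0 at each concentration

ratio_minus_1 = 1.0 / f_norm - 1.0              # (F0/F) - 1
ksv = np.polyfit(conc_um, ratio_minus_1, 1)[0]  # slope ~ Stern-Volmer constant
print(f"Ksv = {ksv:.3f} per uM")
```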

Keywords: graphene, functionalization, fluorescence quenching, FRET, nitroexplosive detection

Procedia PDF Downloads 412
639 Chronology and Developments in Inventory Control Best Practices for the FMCG Sector

Authors: Roopa Singh, Anurag Singh, Ajay

Abstract:

Agriculture contributes a major share of India's national economy: a major portion of the Indian economy (about 70%) depends upon agriculture, as it forms the main source of income. About 43% of India's geographical area is used for agricultural activity, which involves 65-75% of the total population of India. This work deals with fast-moving consumer goods (FMCG) industries and their inventories, which use agricultural produce as the raw material or input for their final products. Since the beginning of inventory practices, many developments have taken place, and based on a review of various works these can be categorised into three phases. The first phase is related to the development and utilization of the Economic Order Quantity (EOQ) model and methods for optimizing costs and profits. The second phase deals with inventory optimization methods, with the purpose of balancing capital investment constraints and service-level goals. The third and most recent phase has merged inventory control with electrical control theory. Holding inventory is generally considered undesirable, as a large amount of capital is blocked, especially in the mechanical and electrical industries. The case is different for food-processing and agro-based industries and their inventories, however, due to cyclic variation in the cost of their raw materials, which is the reason these industries were selected for this work. The application of electrical control theory to inventory control makes decision-making nearly instantaneous for FMCG industries without the loss of projected profits that occurred during the first and second phases, mainly due to the late implementation of decisions. The work also replaces various inventory and work-in-progress (WIP) related errors with their monetary values, so that decision-making is fully target-oriented.
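For reference, the EOQ model underpinning the first phase balances ordering and holding costs via EOQ = sqrt(2DS/H). A minimal sketch with assumed demand and cost figures:

```python
from math import sqrt

# Classical Economic Order Quantity: EOQ = sqrt(2*D*S / H).
# Demand and cost figures below are assumed for illustration.
D = 12_000   # annual demand, units
S = 150.0    # fixed ordering cost per order
H = 2.5      # holding cost per unit per year

eoq = sqrt(2 * D * S / H)
print(f"EOQ = {eoq:.0f} units, orders per year = {D / eoq:.1f}")  # 1200, 10.0
```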

Keywords: control theory, inventory control, manufacturing sector, EOQ, feedback, FMCG sector

Procedia PDF Downloads 338
638 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias correction steps based on biological considerations, such as GC content, applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by the simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between the estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
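To convey the alternating idea only: with Y ≈ Xβ and both factors unknown, fix one factor and solve for the other, then swap. The sketch below does this with Gaussian alternating least squares and a non-negativity clip; XAEM itself is an EM algorithm on multi-sample read counts, so this is a simplified illustration of the alternation, not the method.

```python
import numpy as np

# Simplified alternating least-squares sketch of the bilinear model Y = X B:
# update B with X fixed, then X with B fixed.  XAEM proper alternates EM
# steps on multi-sample read-count data; this Gaussian version is only a
# sketch of the alternation.
rng = np.random.default_rng(2)
n_bins, n_iso, n_samples = 30, 3, 12
X_true = np.abs(rng.normal(size=(n_bins, n_iso)))
B_true = np.abs(rng.normal(size=(n_iso, n_samples)))
Y = X_true @ B_true + 0.01 * rng.normal(size=(n_bins, n_samples))

X = np.abs(rng.normal(size=(n_bins, n_iso)))      # random start
for _ in range(100):
    B = np.clip(np.linalg.lstsq(X, Y, rcond=None)[0], 0.0, None)        # beta-step
    X = np.clip(np.linalg.lstsq(B.T, Y.T, rcond=None)[0].T, 0.0, None)  # X-step

rel_err = np.linalg.norm(Y - X @ B) / np.linalg.norm(Y)
print(f"relative reconstruction error = {rel_err:.4f}")
```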

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 126
637 Preparation, Characterisation, and Measurement of the in vitro Cytotoxicity of Mesoporous Silica Nanoparticles Loaded with Cytotoxic Pt(II) Oxadiazoline Complexes

Authors: G. Wagner, R. Herrmann

Abstract:

Cytotoxic platinum compounds play a major role in the chemotherapy of a large number of human cancers. However, due to the severe side effects for the patient and other problems associated with their use, there is a need for the development of more efficient drugs and new methods for their selective delivery to tumours. One way to achieve the latter could be the use of nanoparticulate substrates that can adsorb or chemically bind the drug. In the cell, the drug is supposed to be slowly released, either by physical desorption or by dissolution of the particle framework. Ideally, the cytotoxic properties of the platinum drug then unfold only in the cancer cell, and over a longer period of time due to the gradual release. In this paper, we report on our first steps in this direction. The binding properties of a series of cytotoxic Pt(II) oxadiazoline compounds to mesoporous silica particles have been studied by NMR and UV/vis spectroscopy. High loadings were achieved when the Pt(II) compound was relatively polar and had been dissolved in a relatively nonpolar solvent before the silica was added. Typically, 6-10 hours were required for complete equilibration, suggesting that adsorption occurred not only on the outer surface but also in the interior of the pores. The untreated and Pt(II)-loaded particles were characterised by C, H, N combustion analysis, BET/BJH nitrogen sorption, electron microscopy (REM and TEM) and EDX. With the latter methods we were able to demonstrate the homogeneous distribution of the Pt(II) compound on and in the silica particles; no bulk Pt(II) precipitate had formed. The in vitro cytotoxicity in a human cancer cell line (HeLa) has been determined for one of the new platinum compounds adsorbed to mesoporous silica particles of different sizes, and compared with that of the corresponding compound in solution. The IC50 data are similar in all cases, suggesting that the release of the Pt(II) compound was relatively fast and possibly occurred before the particles reached the cells. Overall, the platinum drug is chemically stable on silica and retains its activity upon prolonged storage.

Keywords: cytotoxicity, mesoporous silica, nanoparticles, platinum compounds

Procedia PDF Downloads 307
636 The Effectiveness of Cash Flow Management by SMEs in the Mafikeng Local Municipality of South Africa

Authors: Ateba Benedict Belobo, Faan Pelser, Ambe Marcus

Abstract:

Aims: This study arises from repeated complaints, received via electronic mail, about the underperformance of Mafikeng small and medium-sized enterprises after the global financial crisis. The authors were of the view that this poor performance could be a result of negative effects on the cash flow of these businesses due to volatility in the general business environment prior to the global crisis. Thus, the paper was mainly aimed at determining the shortcomings experienced by these SMEs with regard to cash flow management. It was also aimed at suggesting possible measures to improve the cash flow management of these SMEs in such difficult times. Methods: A case study was conducted on 3 beverage suppliers, 27 bottle stores, the 3 largest fast-moving consumer goods supermarkets, and 7 automobile enterprises in the Mafikeng local municipality. A mixed-methods research design was employed, and purposive sampling was used to select the SMEs that participated. The views and experiences of the participants were captured through in-depth interviews. Data from the empirical investigation were interpreted using open coding and a simple percentage formula. Results: Findings from the empirical research reflected that the majority of Mafikeng SMEs suffered poor operational performance prior to the global financial crisis, primarily as a result of poor cash flow management. However, the empirical outcome also indicated other, secondary factors contributing to this poor operational performance. Conclusion: Finally, the authors proposed possible measures that could be used to improve cash flow management and to address the other factors affecting the operational performance of SMEs in the Mafikeng local municipality, in order to achieve better business performance.

Keywords: cash flow, business performance, global financial crisis, SMEs

Procedia PDF Downloads 413
635 Cybersecurity Engineering BS Degree Curricula Design Framework and Assessment

Authors: Atma Sahu

Abstract:

After 9/11, the wars of the future will be cyberwars, and as cyberwars grow in intensity they aggravate the country's cybersecurity workforce hiring and retention issues. Currently, many organizations have unfilled cybersecurity positions and, to a lesser degree, understaffed cybersecurity teams. Therefore, there is a critical need to develop new programs to help meet the market demand for cybersecurity engineers (CYSE) and personnel. Coppin State University in the United States was responsible for developing a cybersecurity engineering BS degree program. The CYSE curriculum design methodology consisted of three parts. First, the pervasive framework of the ACM Cross-Cutting Concepts standard helped curriculum designers and students explore connections among the knowledge areas of the core courses and reinforce the security mindset conveyed in them. Second, the core course context was created to assist students in resolving security issues in authentic cyber situations involving cybersecurity systems in various aspects of industrial work, while adhering to the NIST standards framework. The last part of the CYSE curriculum design was the institutional student learning outcomes (SLOs), integrated and aligned across the content courses, representing more detailed outcomes and emphasizing what learners can do over merely what they know. The core courses of the CYSE program express competencies and learning outcomes using action verbs from Bloom's Revised Taxonomy. This aspect of the CYSE BS degree program's design rests on three pillars: the ACM, NIST, and SLO standards, which all CYSE curriculum designers should know. This unique CYSE curriculum design methodology also addresses how students and the CYSE program will be assessed and evaluated. It is also critical that educators, program managers, and students understand the importance of staying current in this fast-paced CYSE field.

Keywords: cyber security, cybersecurity engineering, systems engineering, NIST standards, physical systems

Procedia PDF Downloads 64
634 Leadership and Entrepreneurship in Higher Education: Fostering Innovation and Sustainability

Authors: Naziema Begum Jappie

Abstract:

Leadership and entrepreneurship in higher education have become critical components in navigating the evolving landscape of academia in the 21st century. This abstract explores the multifaceted relationship between leadership and entrepreneurship within the realm of higher education, emphasizing their roles in fostering innovation and sustainability. Higher education institutions, often characterized as slow-moving and resistant to change, are facing unprecedented challenges. Globalization, rapid technological advancements, changing student demographics, and financial constraints necessitate a reimagining of traditional models. Leadership in higher education must embrace entrepreneurial thinking to effectively address these challenges. Entrepreneurship in higher education involves cultivating a culture of innovation, risk-taking, and adaptability. Visionary leaders who promote entrepreneurship within their institutions empower faculty and staff to think creatively, seek new opportunities, and engage with external partners. These entrepreneurial efforts lead to the development of novel programs, research initiatives, and sustainable revenue streams. Innovation in curriculum and pedagogy is a central aspect of leadership and entrepreneurship in higher education. Forward-thinking leaders encourage faculty to experiment with teaching methods and technology, fostering a dynamic learning environment that prepares students for an ever-changing job market. Entrepreneurial leadership also facilitates the creation of interdisciplinary programs that address emerging fields and societal challenges. Collaboration is key to entrepreneurship in higher education. Leaders must establish partnerships with industry, government, and non-profit organizations to enhance research opportunities, secure funding, and provide real-world experiences for students. Entrepreneurial leaders leverage their institutions' resources to build networks that extend beyond campus boundaries, strengthening their positions in the global knowledge economy. Financial sustainability is a pressing concern for higher education institutions. Entrepreneurial leadership involves diversifying revenue streams through innovative fundraising campaigns, partnerships, and alternative educational models. Leaders who embrace entrepreneurship are better equipped to navigate budget constraints and ensure the long-term viability of their institutions. In conclusion, leadership and entrepreneurship are intertwined elements essential to the continued relevance and success of higher education institutions. Visionary leaders who champion entrepreneurship foster innovation, enhance the student experience, and secure the financial future of their institutions. As academia continues to evolve, leadership and entrepreneurship will remain indispensable tools in shaping the future of higher education. This abstract underscores the importance of these concepts and their potential to drive positive change within the higher education landscape.

Keywords: entrepreneurship, higher education, innovation, leadership

Procedia PDF Downloads 47
633 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that the same LivDet subset is used across all training and testing for each model; this way, we can compare the generalization performance on unseen data across all the different models. The best CNN (AlexNet), with the appropriate loss function and optimizer, achieves a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's accuracy together with its parameter count and mean average error rate, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
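A hedged PyTorch-style sketch of the loss/optimizer sweep described above is given below. The tiny linear model and random tensors are stand-ins for the CNNs and the LivDet 2017 subset; also, PyTorch has no built-in Cosine Proximity or Center Loss, so only two of the four losses are shown (MultiMarginLoss serving as the multiclass hinge variant).

```python
import torch
import torch.nn as nn

# Sweep over loss functions and optimizers, as in the study, but with a
# stand-in model and random data instead of AlexNet/VGGNet/ResNet on the
# LivDet 2017 subset.
x = torch.randn(64, 32)            # stand-in feature batch
y = torch.randint(0, 2, (64,))     # live (0) vs spoof (1) labels

losses = {"cross_entropy": nn.CrossEntropyLoss(),
          "hinge": nn.MultiMarginLoss()}  # multiclass hinge variant
optims = {"adam": torch.optim.Adam, "sgd": torch.optim.SGD,
          "rmsprop": torch.optim.RMSprop, "adadelta": torch.optim.Adadelta,
          "adagrad": torch.optim.Adagrad, "nadam": torch.optim.NAdam}

for lname, criterion in losses.items():
    for oname, opt_cls in optims.items():
        model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
        optimizer = opt_cls(model.parameters(), lr=1e-2)
        for _ in range(50):        # a few steps per combination
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        print(f"{lname:13s} + {oname:8s}: final training loss {loss.item():.3f}")
```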

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 113
632 Exploring the Relationship between Organisational Identity and Value Systems: Reflecting on the Values-Crafting Process in a Multi-National Organisation within the Entertainment Industry

Authors: Dieter Veldsman, Theo Heyns Veldsman

Abstract:

The knowledge economy demands an organisation that is flexible, adaptable and able to navigate an ever-changing environment. This fast-paced environment has, however, resulted in an organisational landscape that battles to engage employees, retain top talent and create meaningful work for its members. In the knowledge economy, the concept of organisational identity has become an important consideration as organisations aim to create a compelling and inviting narrative for all stakeholders across the business value chain. Values are often seen as the behavioural framework that informs organisational culture, yet values are often perceived to be inauthentic and misaligned with the true character or identity of the organisation and with how it is perceived by different role players. This paper explores the relationship between organisational identity and value systems through a case study of a multi-national organisation in South Africa. The paper evaluates the implementation of a mixed-methods OD approach that gathered the collaborative inputs of more than 4500 employees who participated in crafting the newly established value system after a retrenchment process. The paper evaluates the relationship between the newly crafted value system and the identity of the organisation as described by various internal and external stakeholders, in order to explore potential alignment and dissonance and to draw key insights into the relationship between organisational identity and values. The case study is reported from the perspective of an OD consultant who supported the transformation process over a period of 8 months, and aims to provide key insights into values and identity alignment within knowledge economy organisations. From a practical perspective, the paper provides insights into how values are created, perceived and lived within organisations, and their impact on employee engagement and culture.

Keywords: culture, organisational development, organisational identity, values

Procedia PDF Downloads 288
631 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, Jose R. Pérez-Correa

Abstract:

Emamectin benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%) and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a lower EB release under gastric conditions while permitting a larger release during intestinal digestion. Two approaches were used to determine this: the desirability approach (DA), and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques maintained the integrity of EB at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed the greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs, owing to a 60% reduction in EB relative to the optimal EB amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD and ID show values 9.5%, 84.8% and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
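The desirability approach referred to above maps each response onto a [0, 1] desirability and combines them with a geometric mean, in the style of Derringer and Suich. The sketch below illustrates this with one-sided transforms; the targets, bounds, and response values are invented for illustration and are not the study's data.

```python
import numpy as np

# Derringer-Suich-style desirability: map each response onto [0, 1] and
# combine with a geometric mean.  All numbers below are assumed.
def d_max(y, lo, hi):
    """Desirability for a larger-is-better response."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def d_min(y, lo, hi):
    """Desirability for a smaller-is-better response."""
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

d = [d_max(78.0, 50.0, 95.0),   # EE%: maximise encapsulation efficiency
     d_max(62.0, 40.0, 90.0),   # Y%:  maximise yield
     d_min(8.0, 2.0, 30.0),     # GD%: minimise gastric release
     d_max(70.0, 30.0, 90.0)]   # ID%: maximise intestinal release

D = float(np.prod(d) ** (1.0 / len(d)))  # overall desirability
print(f"d = {[round(v, 2) for v in d]}, overall D = {D:.2f}")
```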

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 109
630 Brain-Computer Interface System for Lower Extremity Rehabilitation of Chronic Stroke Patients

Authors: Marc Sebastián-Romagosa, Woosang Cho, Rupert Ortner, Christy Li, Christoph Guger

Abstract:

Neurorehabilitation based on Brain-Computer Interfaces (BCIs) shows important rehabilitation effects for patients after stroke. Previous studies have shown improvements for patients who are in the chronic stage and/or have severe hemiparesis, cases that are particularly challenging for conventional rehabilitation techniques. For this publication, seven stroke patients in the chronic phase with hemiparesis in the lower extremity were recruited. All of them participated in 25 BCI sessions, about 3 times a week. The BCI system was based on Motor Imagery (MI) of the paretic ankle dorsiflexion and healthy wrist dorsiflexion, with Functional Electrical Stimulation (FES) and avatar feedback. Assessments were conducted before, during and after the rehabilitation training to evaluate changes in motor function. Our primary measures were the 10-meter walking test (10MWT), the range of motion (ROM) of ankle dorsiflexion, and the Timed Up and Go (TUG). Results show a significant increase in gait speed in the primary measure, 10MWT fast velocity, of 0.18 m/s (IQR = [0.12, 0.20]), P = 0.016. The speed in the TUG also increased significantly, by 0.1 m/s (IQR = [0.09, 0.11]), P = 0.031. The active ROM increased by 4.65° (IQR = [1.67, 7.4]) after the rehabilitation training, P = 0.029. These functional improvements persisted at least one month after the end of the therapy. These outcomes show the feasibility of this BCI approach for chronic stroke patients and further support the growing consensus that these types of tools might develop into a new paradigm of rehabilitation for stroke patients. However, the results are from only seven chronic stroke patients, so the authors believe that this approach should be further validated in broader randomized controlled studies involving more patients. MI- and FES-based non-invasive BCIs are showing improvements in the gait rehabilitation of patients in the chronic stage after stroke. This could have an impact on the rehabilitation techniques used for these patients, especially when they are severely impaired and their mobility is limited.
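The abstract reports medians with IQRs and exact P values for n = 7, which is the kind of output a nonparametric paired test produces. The sketch below runs a Wilcoxon signed-rank test on hypothetical pre/post 10MWT speeds; both the choice of test and the data are assumptions for illustration, not taken from the paper.

```python
from scipy.stats import wilcoxon

# Paired nonparametric comparison of pre- vs post-therapy 10MWT fast gait
# speed for n = 7 patients.  Test choice and speeds are assumptions.
pre = [0.61, 0.48, 0.72, 0.55, 0.66, 0.44, 0.58]   # m/s before training
post = [0.80, 0.62, 0.90, 0.74, 0.82, 0.60, 0.75]  # m/s after training

stat, p = wilcoxon(post, pre)
print(f"W = {stat}, P = {p:.3f}")  # with 7 positive differences, P = 0.016
```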

Keywords: neuroscience, brain-computer interfaces, rehabilitation, stroke

Procedia PDF Downloads 77
629 Reactivities of Turkish Lignites during Oxygen Enriched Combustion

Authors: Ozlem Uguz, Ali Demirci, Hanzade Haykiri-Acma, Serdar Yaman

Abstract:

Lignitic coal holds its position as Turkey's most important indigenous energy source for generating electricity in thermal power plants. Hence, efficient and environmentally friendly use of lignite in electricity generation is of great importance, and clean coal technologies have been planned to mitigate emissions and provide more efficient burning in power plants. In this context, oxygen enriched combustion (oxy-combustion) is regarded as one of the clean coal technologies; it is based on burning with oxygen concentrations higher than that in air. Since most Turkish coals are low rank with high mineral matter content, the unburnt carbon trapped in the ash is, unfortunately, high, and it leads to significant losses in the overall efficiency of the thermal plants. Besides, the necessity of burning huge amounts of these low calorific value lignites to obtain the desired amount of energy also results in the formation of large amounts of ash that is rich in unburnt carbon. Oxygen enriched combustion technology makes it possible to increase the burning efficiency through nearly complete burning of the carbon content of the fuel. This also contributes to the protection of air quality, as emission levels drop considerably. The aim of this study is to investigate the unburnt carbon content and the burning reactivities of several different lignite samples under oxygen enriched conditions. For this reason, the combined effects of temperature and the oxygen/nitrogen ratio of the burning atmosphere were investigated and interpreted. To do this, Turkish lignite samples from the Adıyaman-Gölbaşı and Kütahya-Tunçbilek regions were first characterized by proximate and ultimate analyses, and their burning profiles were derived from DTA (Differential Thermal Analysis) curves. Then, these lignites were subjected to a slow burning process in a horizontal tube furnace at different temperatures (200ºC, 400ºC, 600ºC for the Adıyaman-Gölbaşı lignite and 200ºC, 450ºC, 800ºC for the Kütahya-Tunçbilek lignite) under atmospheres having O₂+N₂ proportions of 21%O₂+79%N₂, 30%O₂+70%N₂, 40%O₂+60%N₂, and 50%O₂+50%N₂. These burning temperatures were specified based on the burning profiles derived from the DTA curves. The residues obtained from these burning tests were also analyzed by proximate and ultimate analyses to determine the unburnt carbon content along with the unused energy potential. The reactivity of these lignites was calculated using several methodologies, and the burning yield under air (21%O₂+79%N₂) was used as a benchmark to compare the effectiveness of the oxygen enriched conditions. It was concluded that the oxygen enriched combustion method enhanced the combustion efficiency and lowered the unburnt carbon content of the ash. Combustion of low-rank coals under oxygen enriched conditions was found to be a promising way to improve the efficiency of lignite-fired energy systems. However, a cost-benefit analysis should be considered for a better justification of this method, since the use of more oxygen brings a non-negligible additional cost.
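
The abstract does not state which formula relates the residue analyses to reactivity; one common choice is the ash-tracer estimate of combustible burnout, sketched below under that assumption and with illustrative ash contents rather than the study's measurements.

```python
# A minimal sketch of the ash-tracer estimate of combustible burnout,
# one common way to quantify how completely a coal sample burned. The
# abstract does not state which reactivity formulas were used, and the
# ash percentages below are illustrative values, not the study's data.

def burnout(ash_fuel: float, ash_residue: float) -> float:
    """Fraction of combustible matter burned, assuming ash is inert.

    ash_fuel    -- ash content of the raw lignite (wt%, dry basis)
    ash_residue -- ash content of the burning residue (wt%, dry basis)
    """
    return 1.0 - (ash_fuel * (100.0 - ash_residue)) / (
        ash_residue * (100.0 - ash_fuel)
    )

# Illustrative comparison: air vs. oxygen enriched atmospheres.
ash_fuel = 35.0  # wt%, typical order of magnitude for a high-ash lignite
for label, ash_res in [("21% O2 (air)", 62.0), ("30% O2", 71.0),
                       ("40% O2", 80.0), ("50% O2", 86.0)]:
    print(f"{label}: burnout = {100 * burnout(ash_fuel, ash_res):.1f}%")
```

With this estimate, a residue whose ash fraction equals that of the raw fuel corresponds to zero burnout, and a residue of pure ash corresponds to complete burnout, which is why richer oxygen atmospheres show up as higher residue ash contents.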

Keywords: coal, energy, oxygen enriched combustion, reactivity

Procedia PDF Downloads 256
628 Indigenous Hair Treatment in Abyssinia

Authors: Makda Yeshitela Kifele

Abstract:

Hair treatment protects the hair from loss of volume, colour change, and damage to its properties. Hair is part of human beauty; well-kept hair draws others' admiration and rewards the effort taken to treat it and protect it from damage. There are different methods of protecting human hair from loss and damage, and successful hair care benefits the user's wellbeing more than the underlying problems harm it. Chemical products that protect the hair and enhance its beauty are available worldwide, but they have side effects and are not cost-effective; some are even allergenic for users and leave lasting changes in the hair. Indigenous hair treatment is an effective method that avoids the adverse effects that chemical products leave in users' lives. It can treat the hair safely and effectively, without leaving marks or spots, and it can improve attributes of the hair such as shine, quality, volume, length, and flexibility. Rate is a local plant that plays a significant role in hair treatment; it is available everywhere in the country, and anybody can use it for hair treatment. For this research, 50 women with different hair characteristics were identified as the sample population. The plants were collected from the fields and squeezed into pots to prepare the specimens. The squeezed plants were kept in a refrigerator for three days with some salt to inhibit bacterial growth. A chemical analysis was performed to screen for detrimental substances, and the results showed none that would affect the hair properties or the health of the users. The sample population used the oil for one month without any other oily cosmetics that would disturb the treatment. The outcome was very effective: the hair became shiny, greying was prevented, growth was fast, volume increased, and the hair became flexible, curly or straight as desired, and thicker, with no allergic effects.

Keywords: indigenous, chemicals, curly, treatment

Procedia PDF Downloads 89
627 Effect of Cardio-Specific Overexpression of MUL1, a Mitochondrial Protein, on Myocardial Function

Authors: Ximena Calle, Plinio Cantero-López, Felipe Muñoz-Córdova, Mayarling-Francisca Troncoso, Sergio Lavandero, Valentina Parra

Abstract:

MUL1, a mitochondrial E3 ubiquitin ligase anchored to the outer mitochondrial membrane, is highly expressed in the heart. MUL1 is involved in multiple biological pathways associated with mitochondrial dynamics. Increased MUL1 alters the balance between fission and fusion, affecting mitochondrial function, which plays a crucial role in myocardial function. It is therefore of interest to evaluate the effect of cardiac-specific overexpression of MUL1 on myocardial function. Aim: To determine heart functionality in a mouse model with cardio-specific overexpression of the MUL1 protein. Methods and Results: Male C57BL/Tg transgenic mice with cardiomyocyte-specific overexpression of MUL1 (n=10) and controls (n=4) were evaluated at 12, 27, and 35 weeks of age. A glucose tolerance curve was determined after a 6-hour fast to assess metabolic capacity, a treadmill test was performed, and systolic and diastolic pressures were measured with a mouse tail-cuff blood pressure system. Neither the glucose tolerance curve nor the treadmill test showed significant differences between groups. However, substantial changes in diastolic function were observed by ultrasound, together with changes in cardiac hypertrophy proteins determined by western blot. Conclusions: Cardio-specific overexpression of MUL1 in mice, without any additional treatment, affects diastolic cardiac function, showing the important role played by MUL1 in the heart. Future research should evaluate the effect of cardiomyocyte-specific overexpression of MUL1 under pathological conditions such as a high-fat diet, one of the main risk factors for cardiovascular disease.
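
The abstract does not say how the glucose tolerance curve was summarized for the between-group comparison; a common summary is the area under the curve (AUC) by trapezoidal integration, sketched below with hypothetical time points and glycemia values rather than the study's measurements.

```python
# A minimal sketch of summarizing a glucose tolerance curve as the
# trapezoidal area under the curve (AUC). The sampling times and glycemia
# values are hypothetical illustrations, not the study's data.
import numpy as np
from scipy.integrate import trapezoid

t = np.array([0, 15, 30, 60, 90, 120])                  # minutes after glucose load
glucose_tg  = np.array([110, 290, 340, 300, 230, 180])  # mg/dL, MUL1 mice
glucose_ctl = np.array([105, 280, 335, 295, 225, 175])  # mg/dL, controls

auc_tg  = trapezoid(glucose_tg, t)
auc_ctl = trapezoid(glucose_ctl, t)
print(f"AUC MUL1 = {auc_tg:.0f}, AUC control = {auc_ctl:.0f} (mg/dL*min)")
```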

Keywords: cardiac hypertrophy, diastolic dysfunction, mitochondrial E3 ubiquitin ligase 1, MUL1

Procedia PDF Downloads 53
626 A User Interface for Easiest Way Image Encryption with Chaos

Authors: D. López-Mancilla, J. M. Roblero-Villa

Abstract:

Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 1990s as a new approach for signal encoding that differs from conventional methods, which use numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need to secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of low efficiency when the image is large. Chaos-based encryption offers a new and efficient way to achieve fast and highly secure image encryption. In this work, a user interface for image encryption and a novel and very simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector is hidden within the chaotic vector. Once this is done, an array with the original dimensions of the image is formed and reshaped back. The security of the encrypted images is analysed using statistical analysis, and an optimization stage is used to improve the encryption security while, at the same time, allowing the image to be accurately recovered. The user interface uses the algorithms designed for image encryption, allowing the user to read an image from the hard drive or another external device. The interface encrypts the image in one of three encryption modes, given by three different chaotic systems from which the user can choose. Once the image is encrypted, it is possible to view the security analysis and save the result to the hard disk. The main results of this study show that this simple encryption method, with the optimization stage, achieves encryption security competitive with the complicated encryption methods used in other works. In addition, the user interface allows images encrypted with chaos to be transmitted through any public communication channel, including the internet.
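
The abstract does not identify the three chaotic systems offered by the interface; the sketch below illustrates the reshape-and-combine idea with a logistic-map keystream and XOR mixing, both of which are assumptions for illustration only.

```python
# A minimal sketch of chaos-based image encryption in the spirit described
# above: flatten the image into a vector, generate a chaotic sequence of the
# same length, and combine the two before restoring the original dimensions.
# The logistic map, its parameters, and XOR mixing are illustrative
# assumptions; the paper's three chaotic modes are not given in the abstract.
import numpy as np

def logistic_keystream(n: int, x0: float, r: float = 3.99) -> np.ndarray:
    """Generate n chaotic bytes from the logistic map x -> r*x*(1-x), 0<x0<1."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(image: np.ndarray, x0: float) -> np.ndarray:
    flat = image.reshape(-1)                 # image -> n-dimensional vector
    ks = logistic_keystream(flat.size, x0)   # chaotic vector of equal length
    return (flat ^ ks).reshape(image.shape)  # combine, restore dimensions

decrypt = encrypt  # XOR mixing is its own inverse for the same key x0

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
enc = encrypt(img, x0=0.3141592)
assert np.array_equal(decrypt(enc, x0=0.3141592), img)     # exact recovery
```

Because XOR mixing is an involution, decryption with the same initial condition recovers the image exactly, matching the accurate-recovery property the abstract emphasizes; the sensitivity of the logistic map to x0 is what makes the keystream key-dependent.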

Keywords: image encryption, chaos, secure communications, user interface

Procedia PDF Downloads 468
625 Combining in vitro Protein Expression with AlphaLISA Technology to Study Protein-Protein Interaction

Authors: Shayli Varasteh Moradi, Wayne A. Johnston, Dejan Gagoski, Kirill Alexandrov

Abstract:

The demand for a rapid and more efficient technique to identify protein-protein interactions, particularly in the areas of therapeutics and diagnostics development, is growing. The method described here is a rapid in vitro protein-protein interaction analysis approach based on AlphaLISA technology combined with the Leishmania tarentolae cell-free protein production (LTE) system. Cell-free protein synthesis allows the rapid production of recombinant proteins in a multiplexed format. Among the available in vitro expression systems, LTE offers several advantages over other eukaryotic cell-free systems: it is based on a fast-growing fermentable organism that is inexpensive to cultivate and to prepare lysate from. The high integrity of proteins produced in this system and the ability to co-express multiple proteins make it a desirable method for screening protein interactions. Following the translation of protein pairs in the LTE system, the physical interaction between the proteins of interest is analysed by the AlphaLISA assay. The assay is performed on the unpurified in vitro translation reaction and can therefore be readily multiplexed. This approach can be used in various research applications such as epitope mapping, antigen-antibody analysis and protein interaction network mapping. The intra-viral protein interaction network of Zika virus was studied using the developed technique: the viral proteins were co-expressed pairwise in LTE, and all possible interactions among the viral proteins were tested using AlphaLISA. The assay resulted in the identification of 54 intra-viral protein-protein interactions, of which 19 binary interactions were found to be novel. The presented technique provides a powerful tool for the rapid analysis of protein-protein interactions with high sensitivity and throughput.
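
To make the scale of such a pairwise screen concrete, the sketch below enumerates the co-expression reactions for the ten mature Zika virus proteins. The protein list is standard knowledge, but testing unordered pairs plus self-pairs in a single orientation, and the donor/acceptor tagging shown, are illustrative assumptions rather than details from the abstract.

```python
# A minimal sketch of laying out a pairwise co-expression screen such as
# the Zika intra-viral network described above. The ten mature Zika virus
# proteins are standard; the single-orientation layout and tagging scheme
# are assumptions for illustration, not the study's exact design.
from itertools import combinations_with_replacement

zika_proteins = ["C", "prM", "E", "NS1", "NS2A",
                 "NS2B", "NS3", "NS4A", "NS4B", "NS5"]

# Unordered pairs including self-interactions: C(10, 2) + 10 = 55 reactions.
pairs = list(combinations_with_replacement(zika_proteins, 2))
print(f"{len(pairs)} co-expression reactions to set up")
for bait, prey in pairs[:5]:
    print(f"co-express {bait} (donor-tagged) with {prey} (acceptor-tagged)")
```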

Keywords: AlphaLISA technology, cell-free protein expression, epitope mapping, Leishmania tarentolae, protein-protein interaction

Procedia PDF Downloads 217
624 Clinical Manifestations, Pathogenesis and Medical Treatment of Stroke Caused by Basic Mitochondrial Abnormalities (Mitochondrial Encephalopathy, Lactic Acidosis, and Stroke-like Episodes, MELAS)

Authors: Wu Liching

Abstract:

Aim: This case report aims to discuss the pathogenesis, clinical manifestations, and medical treatment of strokes caused by mitochondrial gene mutations. Methods: Ischemic stroke caused by a mitochondrial gene defect was diagnosed by means of next-generation sequencing for mitochondrial DNA gene variants, imaging examination, neurological examination, and medical history. This study drew its case from patients diagnosed with acute cerebral infarction in the neurology ward of a medical center in northern Taiwan. Result: The case is a 49-year-old married woman with a rare disease, a mitochondrial gene mutation inducing ischemic stroke. She has severe hearing impairment requiring hearing aids and a history of diabetes. During hospitalization, blood tests showed serum lactate of 7.72 mmol/L and CSF lactate of 5.9 mmol/L. Together with the relevant medical history, neurological evaluation showed changes in consciousness and cognition and slowed language expression, and brain magnetic resonance imaging showed a subacute bilateral temporal lobe infarction, an atypical pattern of stroke. The mitochondrial DNA carries the known pathogenic mutation m.3243A>G at a heteroplasmy level of 24.6%. This locus is recorded in MITOMAP as pathogenic for Mitochondrial Encephalopathy, Lactic Acidosis, and Stroke-like episodes (MELAS), Leigh syndrome, and other diseases, and the mutation is recorded in ClinVar as Pathogenic (dbSNP: rs199474657); the patient was therefore diagnosed as a case of stroke caused by a rare mitochondrial gene mutation. After medical treatment, there were no further seizures during hospitalization, and after interventional rehabilitation, the patient's limb weakness, poor language function, and cognitive impairment all improved significantly. Conclusion: Mitochondrial disorders can also present with psychological, neurological, cortical, and autonomic abnormalities, as well as general medical problems. The differential diagnosis therefore covers a wide range, and these disorders are not easy to diagnose. After neurological evaluation, medical history collection, imaging, and rare-disease serological examination, an atypical ischemic stroke caused by a rare mitochondrial gene mutation was diagnosed. We hope that through this case, clinical staff will become more familiar with the diagnosis of cerebral infarction caused by mitochondrial gene variants, and that this report may help improve the clinical diagnosis and treatment of patients with similar symptoms in the future.

Keywords: acute stroke, MELAS, lactic acidosis, mitochondrial disorders

Procedia PDF Downloads 51