Search results for: fast charging
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2066

626 Commercial Law Between Custom and Islamic Law

Authors: Shimaa Abdel-Rahman Amin El-Badawy

Abstract:

Commercial law is the body of legal rules that applies to business and regulates trade. It governs only certain relations, namely those arising from the conduct of commercial activities, and it regulates the activity of a specific class, the class of merchants. Like other branches of law, commercial law has characteristics that distinguish it from other laws, and it draws on various sources from which its basis is derived: the objective (material) source, the historical source, the official source and the interpretative source. This study is limited to the official and interpretative sources: what are these sources, and what is their degree and strength when invoked in commercial disputes? The first topic concerns the characteristics of commercial law. Commercial law has become indispensable to the world of trade and economics, given the reasons for which its legal rules were laid down for the commercial field. It suffices to contrast the stability of the civil environment with the movement and speed that characterise the commercial environment, in addition to trust and credit; it is the characteristics of speed and of trust and credit that justify the existence of commercial law. Business moves quickly, whereas civil dealings are slow and stable. A person concludes civil transactions only rarely in his life, and before any civil act he needs a period of reflection and scrutiny: a person who wishes to marry, or to acquire a house in which to live with his family, must search, investigate and negotiate the price before concluding a purchase contract.
In the commercial field, transactions take place very quickly, because the time factor plays an important role in concluding deals and achieving profits: any delay in contracting for a specific deal may cause the merchant a loss, since commercial activity is tied to the fluctuations of the economy and the market. A merchant may also conclude more than one deal within a single short period, owing to the freedom of commercial law from the formalities and procedures that hinder commercial transactions.

Keywords: law, commercial law, Islamic law, custom and Islamic law

Procedia PDF Downloads 73
625 A Review on Applications of Evolutionary Algorithms to Reservoir Operation for Hydropower Production

Authors: Nkechi Neboh, Josiah Adeyemo, Abimbola Enitan, Oludayo Olugbara

Abstract:

Evolutionary algorithms are techniques extensively used in the planning and management of water resources and systems. They are useful for finding optimal solutions to water resources problems, given the complexities involved in the analysis. River basin management is an essential area that covers the upstream, river inflow and outflow, and downstream aspects of a reservoir. Water is a scarce resource needed by humans and the environment for survival, and its management involves many complexities. Managing this scarce resource is necessary for its proper distribution among competing users in a river basin, which presents many constraints and conflicting objectives. Evolutionary algorithms, which are population-based search algorithms, solve this kind of complex problem with relative ease: they are easy to use, fast and robust, among other advantages. Many applications of evolutionary algorithms are discussed, and the different methodologies involved in modelling and simulating water management problems in river basins are explained. It was found that different evolutionary algorithms suit different problems; appropriate algorithms are therefore suggested for different methodologies and applications based on the results of the studies reviewed. It is concluded that evolutionary algorithms, with their wide applications in water resources management, are viable and straightforward for most of these applications. The results suggest that evolutionary algorithms, applied in the right areas, can yield superior solutions for river basin management, especially in reservoir operation, irrigation planning and management, streamflow forecasting and real-time applications. Future directions for this work are suggested.
This study will assist decision makers and stakeholders in choosing the best evolutionary algorithm for various optimization problems in water resources management.
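The population-based search described above can be illustrated with a minimal real-coded genetic algorithm for a hypothetical single-reservoir release schedule. All inflows, bounds, the energy proxy and the penalty weight below are invented for illustration and are not taken from any study reviewed here.

```python
import random

random.seed(1)

# Hypothetical single-reservoir problem: choose 12 monthly releases r_t (Mm^3)
# to maximise a hydropower proxy, subject to storage staying within limits.
INFLOW = [60, 55, 40, 30, 25, 20, 35, 50, 70, 80, 75, 65]  # illustrative inflows
S0, S_MIN, S_MAX = 300.0, 100.0, 500.0                      # storage (Mm^3)
R_MIN, R_MAX = 10.0, 90.0                                   # release bounds

def fitness(releases):
    """Penalised hydropower proxy: energy ~ total release, minus storage violations."""
    storage, energy, penalty = S0, 0.0, 0.0
    for inflow, r in zip(INFLOW, releases):
        storage += inflow - r
        energy += r  # crude proxy: energy proportional to release
        penalty += max(0.0, S_MIN - storage) + max(0.0, storage - S_MAX)
    return energy - 10.0 * penalty

def evolve(pop_size=50, gens=200, mut=0.2):
    pop = [[random.uniform(R_MIN, R_MAX) for _ in INFLOW] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            if random.random() < mut:                     # clipped Gaussian mutation
                i = random.randrange(len(child))
                child[i] = min(R_MAX, max(R_MIN, child[i] + random.gauss(0, 5)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Real reservoir studies replace the proxy with head-dependent power equations and typically use multi-objective variants (e.g. NSGA-II), but the select-crossover-mutate loop is the same.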

Keywords: evolutionary algorithm, multi-objective, reservoir operation, river basin management

Procedia PDF Downloads 491
624 Screening of Different Exotic Varieties of Potato through Adaptability Trial for Local Cultivation

Authors: Arslan Shehroz, Muhammad Amjad Ali, Amjad Abbas, Imran Ramzan, Muhammad Zunair Latif

Abstract:

Potato (Solanum tuberosum L.) is the fourth most important food crop of the world after wheat, rice and maize, and the staple food in many European countries. Being rich in starch (one of the three main food ingredients) and having the highest productivity per unit area, it has great potential to address the challenge of food security. Processed potato is also consumed as 'fast food' such as chips and crisps. Many biotic and abiotic factors limit potato production and hinder the achievement of its yield potential. In this trial, 20 new varieties were evaluated along with two checks. Plant-to-plant and row-to-row distances were maintained at 20 cm and 75 cm, respectively. The trial was laid out in a randomized complete block design with three replications, and normal agronomic and plant protection measures were applied. The experiment revealed that the exotic variety 171 gave the highest tuber yield of 35.5 t/ha, followed by Masai with 31.0 t/ha. The check variety Simply Red yielded 24.2 t/ha, while the lowest tuber yield (1.5 t/ha) was produced by the exotic variety KWS-06-125. The maximum emergence was shown by the variety Red Sun (89.7%) and the lowest by the variety Camel (71.7%). Regarding tuber grades, the maximum proportion of ration-size tubers was produced by the exotic variety Compass (3.7%), whereas 11 varieties did not produce ration-size tubers at all. The variety Red Sun produced the lowest percentage of small tubers (12.7%), whereas the maximum (93.0%) was produced by the variety Jitka. Regarding disease infestation, the maximum scab incidence (4.0%) was recorded on the variety Masai, the maximum rhizoctonia attack (60.0%) on the variety Camel, and the maximum tuber cracking (0.7%) on the variety Vendulla.

Keywords: check variety, potato, potential and yield, trial

Procedia PDF Downloads 378
623 From News Breakers to News Followers: The Influence of Facebook on the Coverage of the January 2010 Crisis in Jos

Authors: T. Obateru, Samuel Olaniran

Abstract:

In an era when the new media afford easy access to the packaging and dissemination of information, social media have become a popular avenue for sharing information, for good or ill, and the traditional role of journalists as 'news breakers' is fast being eroded. People now share information on events via social media such as Facebook and Twitter, such that journalists themselves often get leads from these sources. Beyond the access to information the new media provide lies the erosion of the gatekeeping role of journalists, who by their training and calling are supposed to handle information responsibly. Thus, sensitive information that journalists would normally filter is shared indiscriminately by social media activists. This was the experience of journalists in Jos, Plateau State, in January 2010, when another of the recurring ethnoreligious crises that have engulfed the state resulted in widespread killing, vandalism, looting and displacement. In what is considered one of the high points of crisis in the state, the journalists who covered it also relied on some of these sources to get their bearings on the violence. This paper examines the role of Facebook in the work of journalists who covered the 2010 crisis. Taking a gatekeeping perspective, it interrogates the extent to which Facebook affected their professional duty, positively or negatively, vis-à-vis the peace journalism model. A survey using a questionnaire elicited information from 50 journalists who covered the crisis. The paper reveals that the dissemination of hate messages via mobile phones and social media, especially Facebook, aggravated the crisis. Journalists became news followers rather than news breakers, because many of them were kept on their toes by information, much of it inaccurate or false, circulating on Facebook.
It recommends that journalists remain true to their calling by upholding their gatekeeping role of disseminating only accurate and responsible information if they are to remain the main source of credible information on which their audiences rely.

Keywords: crisis, ethnoreligious, Facebook, journalists

Procedia PDF Downloads 294
622 Vulnerability Assessment of Vertically Irregular Structures during Earthquake

Authors: Pranab Kumar Das

Abstract:

A vulnerability assessment of buildings with irregularity in the vertical direction has been carried out in this study. The construction of vertically irregular buildings is increasing amid rapid urbanization in developing countries, including India. During two reconnaissance surveys performed after the 2015 Nepal earthquake and the 2016 Imphal (India) earthquake, it was observed that many structures were damaged because of their vertically irregular configuration. Such irregular buildings must nonetheless perform safely during seismic excitation, so there is an urgent need to establish their actual vulnerability so that remedial measures can be taken to protect them during natural hazards such as earthquakes. This assessment will be helpful for India as well as for other developing countries. A substantial body of research has addressed the vulnerability of plan-asymmetric buildings, but much less effort has gone into the vulnerability of vertically irregular buildings during earthquakes. Irregularity in the vertical direction may be caused by an irregular distribution of mass or stiffness or by a geometrically irregular configuration. Detailed analysis of such structures, particularly non-linear/pushover analysis for performance-based design, is a challenging task. The present paper considers a number of models of irregular structures; building models of both reinforced concrete and brick masonry are included for the sake of generality. The analyses are performed with the help of both the finite element method and computational methods. The study, as a whole, may help to arrive at reasonably good estimates of, and insight into, the fundamental and other natural periods of such vertically irregular structures. The ductility demand, storey drift and seismic response studies help to identify the locations of critical stress concentration.
In summary, this paper is a modest step towards understanding the vulnerability of, and framing guidelines for, vertically irregular structures.

Keywords: ductility, stress concentration, vertically irregular structure, vulnerability

Procedia PDF Downloads 229
621 A Perspective on Education to Support Industry 4.0: An Exploratory Study in the UK

Authors: Sin Ying Tan, Mohammed Alloghani, A. J. Aljaaf, Abir Hussain, Jamila Mustafina

Abstract:

Industry 4.0 is a term frequently used to describe the coming industrial era. Higher education institutions aim to prepare students to fulfil future industry needs, and the advancement of digital technology has paved the way for the joint evolution of education and technology. Yet education has proven conservative, with a high level of resistance to change and transformation, and the gap between industry's needs and the competencies education generally offers reveals a growing need for new educational models. The aim of this study was to identify the main issues faced by both universities and students in preparing the future workforce. From December 2018 to April 2019, a regional qualitative study was undertaken in Liverpool, United Kingdom (UK). Interviews were conducted with employers, faculty members and undergraduate students, and the results were analyzed using the open coding method. Four main issues were identified: the characteristics of the future workforce, students' readiness for work, expectations about the different roles played at the tertiary education level, and awareness of the latest trends. The paper concludes that employers and academic practitioners agree that their expectations of each other's roles differ, and that in order to face this era of rapidly changing technology, students need not only the right skills but also the right attitude to learning. The authors therefore propose a learning framework, the 'ASK SUMA' framework, as a guideline to support students, academics and employers in meeting the needs of Industry 4.0. This technology era requires employers, academic practitioners and students to work together to face the coming challenges and fast-changing technologies.
It is also suggested that an interactive system be provided as a platform to support the three parties in playing their roles.

Keywords: attitude, expectations, industry needs, knowledge, skills

Procedia PDF Downloads 125
620 Studying the Evolution of Soot and Precursors in Turbulent Flames Using Laser Diagnostics

Authors: Muhammad A. Ashraf, Scott Steinmetz, Matthew J. Dunn, Assaad R. Masri

Abstract:

This study focuses on the evolution of soot and soot precursors in three piloted turbulent diffusion flames. The fuel compositions are as follows: flame A, ethylene/nitrogen (2:3 by volume); flame B, ethylene/air (2:3 by volume); and flame C, pure methane. The flames are stabilized on a 4 mm diameter jet surrounded by a pilot annulus with an outer diameter of 15 mm; the pilot issues combustion products from stoichiometric premixed flames of hydrogen, acetylene and air. In all cases the jet Reynolds number is 10,000, and air flows in the coflow stream at 5 m/s. Time-resolved laser-induced fluorescence (LIF) is collected at two wavelength bands, in the visible (445 nm) and the UV (266 nm), along with laser-induced incandescence (LII), and the combined results are used to study the concentration, size and growth of soot and its precursors. A set of four fast photomultiplier tubes records the emission data in the temporal domain. A 266 nm laser pulse preferentially excites smaller nanoparticles, whose fluorescence spectrum is analysed to track the presence, evolution and destruction of nanoparticles; a 1064 nm laser pulse excites sufficiently large soot particles, and the resulting incandescence is collected at 1064 nm. At downstream and outer radial locations, intermittency becomes a relevant factor, so the data collected in the turbulent flames are conditioned to account for it: the resulting mean profiles of scattering, fluorescence and incandescence are shown for events that contain traces of soot. It is found that in the upstream regions of the ethylene-air and ethylene-nitrogen flames the presence of soot precursors is rather similar, whereas further downstream the soot concentration grows larger in the ethylene-air flame.

Keywords: laser induced incandescence, laser induced fluorescence, soot, nanoparticles

Procedia PDF Downloads 146
619 Comparison between High Resolution Ultrasonography and Magnetic Resonance Imaging in Assessment of Musculoskeletal Disorders Causing Ankle Pain

Authors: Engy S. El-Kayal, Mohamed M. S. Arafa

Abstract:

There are various causes of ankle pain (AP), both traumatic and non-traumatic, and various imaging techniques are available for its assessment. MRI is considered the imaging modality of choice for evaluating the ankle joint, with the advantages of high spatial resolution and multiplanar capability and hence the ability to visualize the small, complex anatomical structures around the ankle. However, its high cost, the relatively limited availability of MRI systems and the relatively long examination time are all disadvantages, so a faster, less expensive modality with good diagnostic accuracy is needed to fill this gap. High-resolution ultrasonography (HRU) has become increasingly important in the assessment of ankle disorders, with the advantages of being fast, reliable, low-cost and readily available; ultrasound can visualize detailed anatomical structures and assess tendinous and ligamentous integrity. The aim of this study was to compare the diagnostic accuracy of HRU with that of MRI in patients with AP. Forty patients complaining of AP were included; all underwent real-time HRU and MRI of the affected ankle, and the results of both techniques were compared with surgical and arthroscopic findings. All patients were examined according to a defined protocol covering tendon tears or tendinitis; muscle tears; masses or fluid collections; ligament sprains or tears; inflammation or fluid effusion within the joint or bursa; bone and cartilage lesions; and erosions and osteophytes. The mean age of the patients was 38 years; the study comprised 24 women (60%) and 16 men (40%). The accuracy of HRU in detecting the causes of AP was 85%, while that of MRI was 87.5%.
In conclusion, HRU and MRI are complementary investigations: the former can be used as the primary tool, and the latter to confirm the diagnosis and the extent of the lesion, especially when surgical intervention is planned.
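The reported accuracies follow the standard definition, accuracy = (TP + TN) / N, computed against the surgical/arthroscopic reference standard. A minimal sketch, with hypothetical confusion-matrix counts chosen only to reproduce the reported figures over 40 patients (the abstract does not give the individual counts):

```python
# Diagnostic accuracy against a reference standard: fraction of patients
# in whom the imaging finding agreed with surgery/arthroscopy.
def accuracy(tp, tn, fp, fn):
    """(true positives + true negatives) / all patients."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical splits: 34/40 and 35/40 correct respectively.
hru = accuracy(tp=30, tn=4, fp=3, fn=3)   # 0.85  (reported HRU accuracy)
mri = accuracy(tp=31, tn=4, fp=3, fn=2)   # 0.875 (reported MRI accuracy)
```

A fuller comparison would also report sensitivity TP/(TP+FN) and specificity TN/(TN+FP), which need the true counts per category.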

Keywords: ankle pain (AP), high-resolution ultrasound (HRU), magnetic resonance imaging (MRI), ultrasonography (US)

Procedia PDF Downloads 191
618 A Review of Emerging Technologies in Antennas and Phased Arrays for Avionics Systems

Authors: Muhammad Safi, Abdul Manan

Abstract:

In recent years, research on aircraft avionics systems (i.e., radars and antennas) has advanced rapidly. Aircraft technology is shifting from all-mechanical to all-electric aircraft, with the introduction of uninhabited air vehicles and drone taxis over the last few years. This creates an overriding need to summarize the history, latest trends and future developments of aircraft avionics research, for a better understanding and development of new technologies in this domain. This paper focuses on future trends in antennas and phased arrays for avionics systems. Along with a general overview of future avionics trends, this work reviews around 50 high-quality research papers on aircraft communication systems. Electrically powered aircraft have been a hot topic in the modern aircraft world, and electric aircraft hold advantages over their conventional counterparts. With the rise of drone taxis and urban air mobility, fast and reliable communication is essential, so concepts such as Broadband Integrated Digital Avionics Information Exchange Networks (B-IDAIENs) and modular avionics are being researched for the communication needs of future aircraft. A Ku-band phased array antenna based on a modular design can be used in a modular avionics system. Integrated avionics is another emerging research area. The main focus of future avionics work will be integrated modular avionics and infra-red phased array antennas, which are discussed in detail in this paper, along with other topics such as reconfigurable antennas and optical communication. The avionics of future aircraft will be based on integrated modular avionics and small artificial-intelligence-based antennas, and optical and infra-red communication will also come to replace microwave frequencies.

Keywords: AI, avionics systems, communication, electric aircrafts, infra-red, integrated avionics, modular avionics, phased array, reconfigurable antenna, UAVs

Procedia PDF Downloads 81
617 Business Feasibility of Online Marketing of Food and Beverages Products in India

Authors: Dimpy Shah

Abstract:

The global economy has changed substantially over the last three decades. Almost all markets are now transparent and visible to global customers, and corporations are no longer reliant on local markets for trade. The information technology revolution has changed business dynamics and corporate marketing practices, and markets have divided into two formats: traditional and virtual. In a very short span of time, many e-commerce portals have captured the global market, a strategy well supported by the global delivery systems of multinational logistics companies. Markets now deal with global supply chain networks that are more demand-driven and customer-oriented, and corporations have realized the importance of supply chain integration and marketing in this competitive environment. Indian markets have also been significantly affected by these changes. In terms of population, India is second only to China, and demographically almost half its population is young. It has been observed that Indian youth are more inclined towards e-commerce and prefer to buy goods from web portals; this trend was first observed in the Indian service sector, textiles and electronic goods, and has since extended to other product categories. FMCG companies have recognized this change and started integrating their supply chains with e-commerce platforms. This paper attempts to understand the contemporary marketing practices of corporations in the e-commerce business in the Indian food and beverages segment, and to identify innovative marketing practices for the proper execution of their strategies. The findings focus mainly on supply chain re-integration and brand-building strategies with proper utilization of social media.

Keywords: FMCG (Fast Moving Consumer Goods), ISCM (Integrated supply chain management), RFID (Radio Frequency Identification), traditional and virtual formats

Procedia PDF Downloads 275
616 A Sustainable Approach for Waste Management: Automotive Waste Transformation into High Value Titanium Nitride Ceramic

Authors: Mohannad Mayyas, Farshid Pahlevani, Veena Sahajwalla

Abstract:

Automotive shredder residue (ASR) is an industrial waste generated during the recycling of end-of-life vehicles. The large and increasing production volumes of ASR and its hazardous content have raised concerns worldwide, leading some countries to impose tighter restrictions on ASR disposal and encouraging researchers to find efficient solutions for ASR processing. Although a great deal of research has been carried out, all proposed solutions, to our knowledge, remain commercially and technically unproven. While the volume of waste materials continues to increase, the production of materials from new, sustainable sources has become of great importance. Advanced ceramic materials such as nitrides, carbides and borides are widely used in a variety of applications, and among these ceramics much attention has recently been paid to titanium nitride (TiN) owing to its unique characteristics. In our study, we propose a new, sustainable approach to ASR management in which TiN nanoparticles with ideal particle sizes ranging from 200 to 315 nm can be synthesized as a by-product. In this approach, TiN is synthesized thermally by nitriding a pressed mixture of automotive shredder residue (ASR) and titanium oxide (TiO2). The results indicate that TiO2 influences and catalyses the degradation reactions of ASR, helping to achieve fast and full decomposition. In addition, the process yields TiN ceramics with several distinct structures (porous nanostructured, polycrystalline, micro-spherical and nano-sized) obtained simply by tuning the TiO2-to-ASR ratio, and a product with an appreciable TiN content of around 85% was achieved after only one hour of nitridation at 1550 °C.

Keywords: automotive shredder residue, nano-ceramics, waste treatment, titanium nitride, thermal conversion

Procedia PDF Downloads 295
615 Efficient Compact Micro Dielectric Barrier Discharge (DBD) Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems

Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. Zimmerman

Abstract:

Ozone is well known as a powerful oxidant with fast reaction rates. Ozone-based processes leave no by-products, since unreacted ozone reverts to the original oxygen molecule; the application of ozone is therefore widely accepted as one of the main directions for developing sustainable, clean technologies. A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Owing to space constraints and to the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of mini/micro-scale ozone generators that can be incorporated directly into production units. Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and on indigo decomposition under different operating conditions are presented. At the selected operating conditions, with a residence time of 0.25 s, ozone generation is not limited by reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltages from 3.5 kV, with an ozone concentration of 5.28E-6 mol/L at 5 kV, in line with the data presented in a numerical investigation of the MROG. Compared with a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure. Its construction makes it applicable to submerged and dry systems, and with its robust, compact design the MROG can be incorporated into production lines of high complexity.

Keywords: dielectric barrier discharge (DBD), micro reactor, ozone, plasma

Procedia PDF Downloads 338
614 Development of an Electrochemical Aptasensor for the Detection of Human Osteopontin Protein

Authors: Sofia G. Meirinho, Luis G. Dias, António M. Peres, Lígia R. Rodrigues

Abstract:

The emerging development of electrochemical aptasensors has enabled the easy and fast detection of protein biomarkers in standard and real samples. Biomarkers are produced by body organs or tumours and provide a measure of antigens on cell surfaces; when detected in the blood in high amounts, they can be suggestive of tumour activity. Such biomarkers are most often used to evaluate treatment effects or to assess the potential for metastatic disease in patients with established disease. Osteopontin (OPN) is a protein found in all body fluids and is a possible biomarker, because its overexpression has been related to breast cancer progression and metastasis. Biomarkers are now commonly used in the development of diagnostic methods, allowing disease to be detected in its initial stages. A previously described RNA aptamer was used in the current work to develop a simple and sensitive electrochemical aptasensor with high affinity for human OPN. The RNA aptamer was biotinylated and immobilized on a gold electrode through the avidin-biotin interaction. The electrochemical signal generated by the aptamer-target interaction was monitored by cyclic voltammetry in the presence of [Fe(CN)6]3−/4− as a redox probe; the signal showed a current decrease due to the binding of OPN. Preliminary results show that this aptasensor enables the detection of OPN in standard solutions, with good selectivity towards the target in the presence of other interfering proteins such as bovine OPN and bovine serum albumin. The results gathered in the current work suggest that the proposed electrochemical aptasensor is a simple and sensitive detection tool for human OPN and may thus find future application in cancer monitoring.

Keywords: osteopontin, aptamer, aptasensor, screen-printed electrode, cyclic voltammetry

Procedia PDF Downloads 431
613 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Our ability to solve complex engineering problems is directly related to the processing capacity of computers, which allow numerical algorithms to be run quickly and accurately. Besides the increasing interest in numerical simulation, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to give incorrect results when skewed data are present, since normal distributions are symmetric about their means. To visualize and quantify this effect, nine statistical distributions (symmetric and skewed) were considered for modelling a hypothetical slope stability problem; the data modelled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not provide the best fit. In the present work, data obtained from consolidated-drained triaxial tests and saturated direct shear tests were modelled and used to derive analytically the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. On this basis it is possible to derive explicitly the failure probability when the friction angle is treated as a random variable, and to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that best fitted the histogram) and as a normal distribution. This comparison reveals relevant differences when analyzed in the light of risk management.
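The failure probability derived analytically in the abstract can also be approximated by simple Monte Carlo sampling. The sketch below uses an infinite-slope Mohr-Coulomb safety factor with entirely illustrative soil parameters and a Normal friction angle (the paper itself compares Normal and Dagum fits to real test data):

```python
import math
import random

random.seed(0)

# Illustrative slope parameters (NOT the Brasilia data):
C = 10.0                      # cohesion, kPa
GAMMA = 18.0                  # unit weight, kN/m^3
H = 5.0                       # depth of slip surface, m
BETA = math.radians(30)       # slope angle
PHI_MEAN, PHI_SD = 28.0, 3.0  # assumed friction-angle statistics, degrees

def safety_factor(phi_deg):
    """Infinite-slope Mohr-Coulomb safety factor FS = resisting / driving stress."""
    phi = math.radians(phi_deg)
    resisting = C + GAMMA * H * math.cos(BETA) ** 2 * math.tan(phi)
    driving = GAMMA * H * math.sin(BETA) * math.cos(BETA)
    return resisting / driving

# P(failure) = P(FS < 1) estimated by sampling phi from a Normal distribution.
N = 100_000
failures = sum(1 for _ in range(N)
               if safety_factor(random.gauss(PHI_MEAN, PHI_SD)) < 1.0)
p_f = failures / N
```

Swapping the `random.gauss` draw for samples from a fitted Dagum (or any skewed) distribution is exactly the comparison the study performs analytically.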

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 338
612 Luminescent Functionalized Graphene Oxide Based Sensitive Detection of Deadly Explosive TNP

Authors: Diptiman Dinda, Shyamal Kumar Saha

Abstract:

In the 21st century, the sensitive and selective detection of trace amounts of explosives has become a serious problem. Nitro compounds and their derivatives are used worldwide to prepare explosives, and recently TNP (2,4,6-trinitrophenol) has become the most common constituent of powerful explosives, more powerful even than TNT or RDX. Because explosives are electron-deficient in nature, it is very difficult to detect one selectively in a mixture, and the high water solubility of TNP makes its detection in water, in the presence of other explosives, particularly challenging. Simple instrumentation, cost-effectiveness, speed and high sensitivity have made fluorescence-based optical sensing a great success compared with other techniques. Graphene oxide (GO), with its large number of epoxy groups, hosts localized non-radiative electron-hole centres on its surface and therefore shows only very weak fluorescence. In this work, GO is functionalized with 2,6-diaminopyridine to remove these epoxy groups through an SN2 reaction. This converts GO into a bright blue luminescent fluorophore (DAP/rGO) with an intense PL spectrum at ∼384 nm when excited at 309 nm. The material was also characterized by FTIR, XPS, UV, XRD and Raman measurements. Using this fluorophore, a large fluorescence quenching (96%) is observed after the addition of only 200 µL of 1 mM TNP in water; other nitro explosives give very moderate PL quenching in comparison. This high selectivity is attributed to a FRET mechanism operating from the fluorophore to TNP during the quenching experiment. TCSPC measurements also reveal that the lifetime of DAP/rGO decreases drastically from 3.7 to 1.9 ns upon addition of TNP. The material is sensitive to TNP down to the 125 ppb level. We believe that this graphene-based luminescent material will emerge as a new class of sensing material for detecting trace amounts of explosives in aqueous solution.
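The 96% quenching figure corresponds to a quenching efficiency (F0 − F)/F0 of 0.96, and such data are conventionally analysed with the Stern-Volmer relation F0/F = 1 + Ksv·[Q]. The sketch below uses illustrative intensities and an assumed final quencher concentration; the abstract does not report the actual intensities, final TNP concentration or Ksv.

```python
# Standard fluorescence-quenching analysis (illustrative values only).
def quenching_efficiency(f0, f):
    """(F0 - F) / F0, the quenched fraction of the initial intensity."""
    return (f0 - f) / f0

def stern_volmer_constant(f0, f, conc_quencher):
    """Single-point Ksv from F0/F = 1 + Ksv * [Q], in L/mol."""
    return (f0 / f - 1.0) / conc_quencher

# Example: 96% quenching at an assumed final TNP concentration of 1e-4 mol/L.
f0, f = 100.0, 4.0
eff = quenching_efficiency(f0, f)         # 0.96
ksv = stern_volmer_constant(f0, f, 1e-4)  # 2.4e5 L/mol for these assumed values
```

In practice Ksv is taken from the slope of a linear fit of F0/F against several quencher concentrations rather than from a single point.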

Keywords: graphene, functionalization, fluorescence quenching, FRET, nitroexplosive detection

Procedia PDF Downloads 440
611 Chronology and Developments in Inventory Control Best Practices for the FMCG Sector

Authors: Roopa Singh, Anurag Singh, Ajay

Abstract:

Agriculture contributes a major share to the national economy of India: about 70% of the economy depends upon agriculture as its main source of income, and about 43% of India's geographical area is used for agricultural activity, involving 65-75% of the total population. The given work deals with Fast-Moving Consumer Goods (FMCG) industries whose inventories use agricultural produce as raw material or input for their final product. Since the beginning of inventory practice, many developments have taken place, which can be categorised into three phases based on a review of various works. The first phase concerns the development and utilization of the Economic Order Quantity (EOQ) model and methods for optimizing costs and profits. The second phase deals with inventory optimization methods aimed at balancing capital-investment constraints against service-level goals. The third and most recent phase merges inventory control with electrical control theory. Holding inventory is generally considered undesirable, as a large amount of capital is blocked, especially in mechanical and electrical industries. The case is different for food-processing and agro-based industries because of cyclic variation in the cost of their raw materials, which is why these industries were selected for the present work. Applying control theory to inventory control makes decision-making nearly instantaneous for FMCG industries without loss of their proposed profits; such losses occurred during the first and second phases, mainly due to late implementation of decisions. The work also replaces various inventory and work-in-progress (WIP) related errors with their monetary values, so that decision-making is fully target-oriented.
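The first phase above centres on the EOQ model. For readers unfamiliar with it, a minimal sketch of the classic Wilson EOQ formula, Q* = sqrt(2DS/H), follows; the demand and cost figures are purely illustrative, not taken from the paper:

```python
from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Classic Wilson EOQ: the order quantity minimizing the sum of
    annual ordering cost (D/Q * S) and annual holding cost (Q/2 * H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Hypothetical figures for an FMCG raw material (units/year, cost per order,
# holding cost per unit per year).
q = eoq(annual_demand=12000, order_cost=50.0, holding_cost=2.4)
print(round(q))  # 707
```

The second and third phases the abstract describes then layer service-level constraints and feedback control on top of this basic trade-off.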

Keywords: control theory, inventory control, manufacturing sector, EOQ, feedback, FMCG sector

Procedia PDF Downloads 353
610 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias-correction steps based on biological considerations, such as GC content, applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM treats Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM thus performs an automatic, empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and of β. For speed, XAEM uses quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
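The key idea, estimating both factors of Xβ by alternating between them across many samples jointly, can be illustrated with a toy stand-in. The sketch below uses plain alternating least squares on a Gaussian toy problem in place of XAEM's alternating EM on read counts (our simplification, not the authors' code); it shows why multi-sample data makes the bilinear factorization recoverable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bilinear model Y ≈ X @ B with BOTH factors unknown, identifiable only
# because many samples (columns of Y) are analysed simultaneously.
n, p, m = 40, 3, 20                       # equivalence classes x isoforms x samples
X_true = np.abs(rng.standard_normal((n, p)))
B_true = np.abs(rng.standard_normal((p, m)))
Y = X_true @ B_true                       # noiseless multi-sample "data"

X = np.abs(rng.standard_normal((n, p)))   # random start for the design matrix
for _ in range(200):
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)        # fix X, update B
    Xt, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)   # fix B, update X
    X = Xt.T

print(np.linalg.norm(Y - X @ B))          # near zero: the factorization fits
```

With a single sample (m = 1) the same scheme is hopelessly under-determined, which is exactly why the paper's joint multi-sample analysis matters.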

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 142
609 Preparation, Characterisation, and Measurement of the in vitro Cytotoxicity of Mesoporous Silica Nanoparticles Loaded with Cytotoxic Pt(II) Oxadiazoline Complexes

Authors: G. Wagner, R. Herrmann

Abstract:

Cytotoxic platinum compounds play a major role in the chemotherapy of a large number of human cancers. However, due to severe side effects for the patient and other problems associated with their use, there is a need for more efficient drugs and for new methods to deliver them selectively to tumours. One way to achieve the latter could be the use of nanoparticulate substrates that adsorb or chemically bind the drug. In the cell, the drug should then be released slowly, either by physical desorption or by dissolution of the particle framework. Ideally, the cytotoxic properties of the platinum drug unfold only there, in the cancer cell, and over a longer period of time owing to the gradual release. In this paper, we report our first steps in this direction. The binding of a series of cytotoxic Pt(II) oxadiazoline compounds to mesoporous silica particles has been studied by NMR and UV/Vis spectroscopy. High loadings were achieved when the Pt(II) compound was relatively polar and had been dissolved in a relatively nonpolar solvent before the silica was added. Typically, 6-10 hours were required for complete equilibration, suggesting that adsorption occurred not only on the outer surface but also in the interior of the pores. The untreated and Pt(II)-loaded particles were characterised by C, H, N combustion analysis, BET/BJH nitrogen sorption, electron microscopy (SEM and TEM) and EDX. With the latter methods we were able to demonstrate a homogeneous distribution of the Pt(II) compound on and in the silica particles, with no bulk Pt(II) precipitate formed. The in vitro cytotoxicity in a human cancer cell line (HeLa) was determined for one of the new platinum compounds adsorbed on mesoporous silica particles of different sizes and compared with that of the corresponding compound in solution. The IC50 data are similar in all cases, suggesting that release of the Pt(II) compound was relatively fast and possibly occurred before the particles reached the cells. Overall, the platinum drug is chemically stable on silica and retains its activity upon prolonged storage.

Keywords: cytotoxicity, mesoporous silica, nanoparticles, platinum compounds

Procedia PDF Downloads 321
608 The Effectiveness of Cash Flow Management by SMEs in the Mafikeng Local Municipality of South Africa

Authors: Ateba Benedict Belobo, Faan Pelser, Ambe Marcus

Abstract:

Aims: This study arises from repeated e-mail complaints about the underperformance of Mafikeng small and medium-sized enterprises after the global financial crisis. The authors were of the view that this poor performance could be a result of negative effects on the cash flow of these businesses due to volatilities in the general business environment in the wake of the global crisis. The paper was therefore mainly aimed at determining the shortcomings these SMEs experience with regard to cash flow management, and at suggesting possible measures to improve their cash flow management in this difficult time. Methods: A case study was conducted on 3 beverage suppliers, 27 bottle stores, the 3 largest fast-moving consumer goods supermarkets and 7 automobile enterprises in the Mafikeng local municipality. A mixed-method research design was employed, and purposive sampling was used to select the participating SMEs. The views and experiences of participants were captured through in-depth interviews. Data from the empirical investigation were interpreted using open coding and a simple percentage formula. Results: The findings reflect that the majority of Mafikeng SMEs suffered poor operational performance after the global financial crisis, primarily as a result of poor cash flow management. The empirical outcomes also indicated other, secondary factors contributing to this poor operational performance. Conclusion: Finally, the authors proposed possible measures to improve cash flow management and to address the other factors affecting the operational performance of SMEs in the Mafikeng local municipality, in order to achieve better business performance.

Keywords: cash flow, business performance, global financial crisis, SMEs

Procedia PDF Downloads 439
607 Cybersecurity Engineering BS Degree Curricula Design Framework and Assessment

Authors: Atma Sahu

Abstract:

After 9/11, the wars to come will be cyberwars, and as cyberwars intensify, so do the country's problems in hiring and retaining a cybersecurity workforce. Currently, many organizations have unfilled cybersecurity positions, and, to a lesser degree, their cybersecurity teams are understaffed. There is therefore a critical need to develop new programs to help meet the market demand for cybersecurity engineers (CYSE) and personnel. Coppin State University in the United States was responsible for developing a cybersecurity engineering BS degree program. The CYSE curriculum design methodology consisted of three parts. First, the pervasive framework of the ACM cross-cutting concepts standard helped curriculum designers and students explore connections among the knowledge areas of the core courses and reinforce the security mindset conveyed in them. Second, the core course context was created to assist students in resolving security issues in authentic cyber situations involving cybersecurity systems in various aspects of industrial work, while adhering to the NIST standards framework. The last part of the CYSE curriculum design was the integration and alignment of institutional student learning outcomes (SLOs) in content courses, representing more detailed outcomes and emphasizing what learners can do over merely what they know. The CYSE program's core courses express competencies and learning outcomes using action verbs from Bloom's Revised Taxonomy. This aspect of the CYSE BS degree program's design rests on three pillars (the ACM, NIST, and SLO standards), which all CYSE curriculum designers should know. This unique curriculum design methodology also addresses how students and the CYSE program will be assessed and evaluated. It is also critical that educators, program managers, and students understand the importance of staying current in this fast-paced CYSE field.

Keywords: cyber security, cybersecurity engineering, systems engineering, NIST standards, physical systems

Procedia PDF Downloads 95
606 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss function and optimizer. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that the same subset of LivDet 2017 is used for training and testing across all models; this way, we can compare performance on unseen data, in terms of generalization, across all models. The best CNN (AlexNet) with the appropriate loss function and optimizer yields a performance gain of more than 3% over the other CNN models with their default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the parameter counts and mean average error rates of the high-accuracy models, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to the final model.
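Why the loss choice matters can be seen without any CNN at all: the same raw classifier score earns very different penalties, and hence different gradients, under cross-entropy versus hinge loss. A minimal illustration (our own, not the paper's code) for a binary live/spoof score:

```python
import numpy as np

def cross_entropy(score: float, label: int) -> float:
    """Binary cross-entropy on a sigmoid of the raw score (label in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-score))          # probability of "live"
    return float(-np.log(p if label == 1 else 1.0 - p))

def hinge(score: float, label: int) -> float:
    """Hinge loss; expects labels mapped to {-1, +1}. Zero beyond margin 1."""
    y = 1 if label == 1 else -1
    return max(0.0, 1.0 - y * score)

for s in (-2.0, 0.0, 2.0):
    print(f"score={s:+.1f}  CE={cross_entropy(s, 1):.3f}  hinge={hinge(s, 1):.3f}")
```

Cross-entropy keeps penalizing (and pushing) confident correct scores a little, while hinge loss goes exactly to zero past the margin; trained on identical data, the two therefore shape the decision boundary differently, which is the effect the paper measures across models.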

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 136
605 Exploring the Relationship between Organisational Identity and Value Systems: Reflecting on the Values-Crafting Process in a Multi-National Organisation within the Entertainment Industry

Authors: Dieter Veldsman, Theo Heyns Veldsman

Abstract:

The knowledge economy demands an organisation that is flexible, adaptable and able to navigate an ever-changing environment. This fast-paced environment has, however, produced an organisational landscape that struggles to engage employees, retain top talent and create meaningful work for its members. In the knowledge economy, organisational identity has become an important consideration as organisations aim to create a compelling and inviting narrative for all stakeholders across the business value chain. Values are often seen as the behavioural framework that informs organisational culture, yet values are frequently perceived as inauthentic and misaligned with the true character or identity of the organisation as perceived by different role players. This paper explores the relationship between organisational identity and value systems through a case study of a multi-national organisation in South Africa. It evaluates the implementation of a mixed-methods OD approach that gathered collaborative inputs from more than 4,500 employees who participated in crafting a new values system following a retrenchment process. The paper evaluates the relationship between the newly crafted value system and the identity of the organisation as described by various internal and external stakeholders, in order to explore potential alignment, dissonance and key insights into the relationship between organisational identity and values. The case study is reported from the perspective of an OD consultant who supported the transformation process over a period of 8 months, and aims to provide key insights into values and identity alignment within knowledge-economy organisations. From a practical perspective, the paper provides insights into how values are created, perceived and lived within organisations, and their impact on employee engagement and culture.

Keywords: culture, organisational development, organisational identity, values

Procedia PDF Downloads 311
604 Brain-Computer Interface System for Lower Extremity Rehabilitation of Chronic Stroke Patients

Authors: Marc Sebastián-Romagosa, Woosang Cho, Rupert Ortner, Christy Li, Christoph Guger

Abstract:

Neurorehabilitation based on Brain-Computer Interfaces (BCIs) shows important rehabilitation effects for patients after stroke. Previous studies have shown improvements for patients in the chronic stage and/or with severe hemiparesis, who are particularly challenging for conventional rehabilitation techniques. For this publication, seven stroke patients in the chronic phase with hemiparesis in the lower extremity were recruited. All of them participated in 25 BCI sessions, about 3 times a week. The BCI system was based on motor imagery (MI) of paretic ankle dorsiflexion and healthy wrist dorsiflexion, with Functional Electrical Stimulation (FES) and avatar feedback. Assessments were conducted before, during and after the rehabilitation training to measure changes in motor function. The primary measures were the 10-meter walking test (10MWT), range of motion (ROM) of ankle dorsiflexion, and Timed Up and Go (TUG). Results show a significant increase in gait speed in the primary measure, 10MWT at fast velocity, of 0.18 m/s (IQR = [0.12, 0.20], P = 0.016). Speed in the TUG also increased significantly, by 0.1 m/s (IQR = [0.09, 0.11], P = 0.031), and active ROM increased by 4.65° (IQR = [1.67, 7.4], P = 0.029) after rehabilitation training. These functional improvements persisted at least one month after the end of therapy. These outcomes show the feasibility of this BCI approach for chronic stroke patients and further support the growing consensus that such tools might develop into a new paradigm of rehabilitation for stroke patients. However, the results come from only seven chronic stroke patients, so the authors believe this approach should be further validated in broader randomized controlled studies involving more patients. MI- and FES-based non-invasive BCIs are showing improvement in the gait rehabilitation of patients in the chronic stage after stroke. This could influence the rehabilitation techniques used for these patients, especially when they are severely impaired and their mobility is limited.

Keywords: neuroscience, brain-computer interfaces, rehabilitation, stroke

Procedia PDF Downloads 92
603 Indigenous Hair Treatment in Abyssinia

Authors: Makda Yeshitela Kifele

Abstract:

Hair treatment protects the hair from loss of volume, colour change and damage to its properties. Hair is part of human beauty: well-treated hair attracts appreciation for the effort taken to care for it and to protect it from damage. There are different methods of protecting human hair from loss and damage, and healthy hair benefits a person's psychology more than hair problems do. Chemical products that protect the hair and enhance its beauty are available worldwide, but they have side effects and are not cost-effective; some are even allergenic to users and leave lasting changes in the hair. Indigenous hair treatment is an effective method that reduces the adverse effects and problems that such chemicals leave in people's lives. It can treat the hair safely and effectively, without major side effects or blemishes; rather, it improves attributes of the hair such as shine, quality, quantity, length and flexibility. Rate, a local plant available everywhere in the country, plays a significant role in hair treatment and can be used by anybody. For this research, 50 women with different hair characteristics were identified as the sample population. The plant material was collected from the fields and squeezed into pots to prepare the specimens. The squeezed plants were stored in a refrigerator for three days with some salt added to inhibit bacteria. Chemical analysis was carried out to screen for detrimental substances; the results showed no substances detrimental to hair properties or to the health of the users. The sample population used the oil for one month without any other oily cosmetics that could disturb the treatment. The output was very effective: it brought shine to the hair, prevented greying, promoted fast growth, increased hair volume, and left the hair flexible, whether curly or straight, and thicker, with no allergic effects.

Keywords: indigenous, chemicals, curly, treatment

Procedia PDF Downloads 108
602 Effect of Cardio-Specific Overexpression of MUL1, a Mitochondrial Protein on Myocardial Function

Authors: Ximena Calle, Plinio Cantero-López, Felipe Muñoz-Córdova, Mayarling-Francisca Troncoso, Sergio Lavandero, Valentina Parra

Abstract:

MUL1, a mitochondrial E3 ubiquitin ligase anchored to the outer mitochondrial membrane, is highly expressed in the heart. MUL1 is involved in multiple biological pathways associated with mitochondrial dynamics. Increased MUL1 disturbs the balance between fission and fusion, affecting mitochondrial function, which plays a crucial role in myocardial function. It is therefore of interest to evaluate the effect of cardiac-specific overexpression of MUL1 on myocardial function. Aim: To determine heart functionality in a mouse model with cardio-specific overexpression of the MUL1 protein. Methods and Results: Male C57BL/Tg transgenic mice with cardiomyocyte-specific overexpression of MUL1 (n=10) and controls (n=4) were evaluated at 12, 27, and 35 weeks of age. Glucose tolerance curves were determined after a 6-hour fast to assess metabolic capacity, a treadmill test was performed, and systolic and diastolic pressure were evaluated with mouse tail-cuff blood pressure equipment. Neither the glucose tolerance curve nor the treadmill test showed significant changes between groups. However, substantial changes in diastolic function were observed by ultrasound and by western blot determination of cardiac hypertrophy proteins. Conclusions: Cardio-specific overexpression of MUL1 in untreated mice affects diastolic cardiac function, showing the important role played by MUL1 in the heart. Future research should evaluate the effect of cardiomyocyte-specific overexpression of MUL1 under pathological conditions such as a high-fat diet, one of the main risk factors for cardiovascular disease.

Keywords: diastolic dysfunction, cardiac hypertrophy, mitochondrial E3 ubiquitin ligase 1, MUL1

Procedia PDF Downloads 74
601 A User Interface for Easiest Way Image Encryption with Chaos

Authors: D. López-Mancilla, J. M. Roblero-Villa

Abstract:

Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 1990s as a new approach for signal encoding that differs from conventional methods using numerical algorithms as the encryption key. Algorithms for image encryption have received much attention because of the need for security in real-time image transmission over the internet and wireless networks. Known image encryption algorithms, like the Data Encryption Standard (DES), have the drawback of low efficiency when the image is large. Chaos-based encryption offers a new and efficient way to obtain fast and highly secure image encryption. In this work, a user interface for image encryption and a novel and very simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector can be hidden within the chaotic vector. Once this is done, an array with the original dimensions of the image is formed and reshaped back. The security of the encrypted images is analysed statistically, and an optimization stage is used to improve encryption security while still allowing the image to be recovered accurately. The user interface uses the algorithms designed for image encryption, allowing an image to be read from the hard drive or another external device and encrypted in one of three modes, given by three different chaotic systems the user can choose. Once the image is encrypted, the safety analysis can be inspected and the result saved to the hard disk. The main results of this study show that this simple encryption method, with the optimization stage, achieves encryption security competitive with the complicated encryption methods used in other works. In addition, the user interface allows images encrypted with chaos to be submitted through any public communication channel, including the internet.

Keywords: image encryption, chaos, secure communications, user interface

Procedia PDF Downloads 490
600 Combining in vitro Protein Expression with AlphaLISA Technology to Study Protein-Protein Interaction

Authors: Shayli Varasteh Moradi, Wayne A. Johnston, Dejan Gagoski, Kirill Alexandrov

Abstract:

The demand for a rapid and more efficient technique to identify protein-protein interactions, particularly in therapeutics and diagnostics development, is growing. The method described here is a rapid in vitro protein-protein interaction analysis approach based on AlphaLISA technology combined with the Leishmania tarentolae cell-free protein expression (LTE) system. Cell-free protein synthesis allows the rapid production of recombinant proteins in a multiplexed format. Among available in vitro expression systems, LTE offers several advantages over other eukaryotic cell-free systems: it is based on a fast-growing, fermentable organism that is inexpensive to cultivate and to prepare lysate from. The high integrity of proteins produced in this system and the ability to co-express multiple proteins make it a desirable method for screening protein interactions. Following translation of protein pairs in the LTE system, the physical interaction between the proteins of interest is analysed by AlphaLISA assay. The assay is performed using the unpurified in vitro translation reaction and can therefore be readily multiplexed. This approach can be used in various research applications, such as epitope mapping, antigen-antibody analysis and protein interaction network mapping. The intra-viral protein interaction network of Zika virus was studied using the developed technique. The viral proteins were co-expressed pair-wise in LTE, and all possible interactions among them were tested using AlphaLISA. The assay resulted in the identification of 54 intra-viral protein-protein interactions, of which 19 binary interactions were found to be novel. The presented technique provides a powerful tool for rapid analysis of protein-protein interactions with high sensitivity and throughput.

Keywords: AlphaLISA technology, cell-free protein expression, epitope mapping, Leishmania tarentolae, protein-protein interaction

Procedia PDF Downloads 237
599 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data-analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results of fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr with the MI procedures available from the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates of a user-specified dynamic systems model via MI, with convergence diagnostic checks. We used dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise-deletion and MI approaches. These results underscore the importance of considering diagnostic information when implementing MI procedures.
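The pooling step the abstract describes, combining estimates across imputed data sets, is conventionally done with Rubin's rules, which MI tools such as MICE implement. A minimal sketch of that pooling (in Python for illustration, with made-up numbers, though dynr.mi itself is R):

```python
import statistics

def pool_rubin(estimates, variances):
    """Rubin's rules: pool m per-imputation estimates q_i with variances u_i.
    Total variance = mean within-imputation variance + inflated
    between-imputation variance."""
    m = len(estimates)
    qbar = statistics.fmean(estimates)       # pooled point estimate
    ubar = statistics.fmean(variances)       # average within-imputation variance
    b = statistics.variance(estimates)       # between-imputation (sample) variance
    total = ubar + (1 + 1 / m) * b           # Rubin's total variance
    return qbar, total

# Hypothetical estimates of one VAR coefficient from m = 3 imputed data sets.
q, v = pool_rubin([0.52, 0.48, 0.55], [0.010, 0.012, 0.011])
print(f"pooled estimate {q:.3f}, total variance {v:.4f}")
```

The between-imputation term is what listwise deletion throws away: when imputations disagree, the pooled standard error widens, which is one route to the significance differences the abstract reports.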

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 164
598 Scientific Expedition to Understand the Crucial Issues of Rapid Lake Expansion and Moraine Dam Instability Phenomena to Justify the Lake Lowering Effort of Imja Lake, Khumbu Region of Sagarmatha, Nepal

Authors: R. C. Tiwari, N. P. Bhandary, D. B. Thapa Chhetri, R. Yatabe

Abstract:

The research examines various issues of lake expansion and the stability of the moraine dam of Imja Lake. Imja Lake, considered the world's highest-altitude glacial lake (5,010 m above mean sea level) and located in the Khumbu, Sagarmatha region of Nepal (27.9° N, 86.9° E), has been reported as one of the fastest-growing glacier lakes in the Nepal Himalaya. The research explores the common phenomena of lake expansion and moraine-dam stability in order to judge whether lake-lowering efforts will be necessary in future at other glacier lakes in the Nepal Himalaya. To this end, we have explored the root causes of rapid lake expansion along with the crucial factors responsible for the stability of the moraine mass. This research helps in understanding the structure of the moraine dam and the contribution of ice-water-moraine interactions to its strength. The nature of the permafrost layer and its effects on moraine-dam stability are also studied. The detailed geotechnical properties of the moraine mass of Imja Lake give a clear picture of the strength of the moraine material and its interactions. A stability analysis of the moraine dam under the strong ground motions of the Mw 7.8 2015 Barpak-Gorkha, Nepal earthquake and its major Mw 7.3 aftershock (Kodari, on the Sindhupalchowk-Dolakha border) has also been carried out to assess the necessity of lake-lowering efforts. A lake-lowering effort was recently completed by the Nepal Army, which constructed an open channel and lowered the lake by 3 m, and it is believed that the entire region is now safe owing to the continuous draining of lake water. However, lowering by this amount of volume and depth does not seem adequate to offer significant risk reduction to downstream communities, given the roughly 75 million cubic metres of impounded water and an average depth of 148.9 m.
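A back-of-envelope calculation from the abstract's own figures shows why a 3 m lowering removes only a small share of the hazard (this is our arithmetic, treating the lake as a slab of uniform depth, which the real bathymetry of course is not):

```python
# Abstract's figures: ~75 million m^3 impounded, average depth 148.9 m,
# channel lowered the surface by 3 m.
volume_m3 = 75e6
avg_depth_m = 148.9
lowering_m = 3.0

area_m2 = volume_m3 / avg_depth_m          # implied mean surface area
drained_m3 = area_m2 * lowering_m          # volume in the drained 3 m slab
print(f"Drained: {drained_m3/1e6:.1f} million m^3 "
      f"({100 * drained_m3 / volume_m3:.1f}% of the impounded water)")
```

Roughly 1.5 million m^3, about 2% of the impounded water, is removed, which supports the abstract's conclusion that a 3 m lowering alone offers limited risk reduction downstream.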

Keywords: finite element method, glacier, moraine, stability

Procedia PDF Downloads 213
597 Chromatographic Preparation and Performance on Zinc Ion Imprinted Monolithic Column and Its Adsorption Property

Authors: X. Han, S. Duan, C. Liu, C. Zhou, W. Zhu, L. Kong

Abstract:

The ionic imprinting technique produces a three-dimensional rigid structure with fixed pore sizes, formed by the binding interactions of ions and functional monomers using the ions as a template; it provides a high level of recognition of the ionic template. To prepare a monolithic column by in-situ polymerization, the template, functional monomers, cross-linking agent and initiating agent are dissolved in solution and injected into the column tube, where the mixture polymerizes at a set temperature; after the synthesis, the unreacted template and solvent are washed out. Monolithic columns are easy to prepare, low in consumption and cost-effective, with fast mass transfer; besides, they support many chemical functions. However, monolithic columns have some problems in practical application, such as low efficiency: accurate quantitative analysis is hindered because the peaks are wide and show tailing, the choice of polymerization systems is limited, and theoretical foundations are lacking. Optimization of components and preparation methods is therefore an important research direction. During the preparation of ion-imprinted monolithic columns, the pore-forming agent makes the polymer generate its porous structure, which influences the physical properties of the polymer; what's more, it directly determines the stability and selectivity of the polymerization reaction. The compounds generated in the pre-polymerization reaction directly determine the identification and screening capabilities of the imprinted polymer; thus the choice of pore-forming agent is critical in the preparation of imprinted monolithic columns. This article mainly focuses on how different pore-forming agents affect the zinc-ion-imprinted monolithic column's enrichment performance for zinc ions.

Keywords: high performance liquid chromatography (HPLC), ionic imprinting, monolithic column, pore-forming agent

Procedia PDF Downloads 214