Search results for: automated container terminal
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1367

1127 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and owing to the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the airfoil S1091 has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained employing CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle to visualize the variation of the coefficients along the simulation process. Employing a statistical response surface methodology, the case study is parametrized considering the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA are determined. Using these values, the terminal speed at each position is calculated considering the specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag will be almost like the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be utilized for several missions, allowing repeatability of microgravity experiments.
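
As an illustration of the final step described above, the sketch below estimates the terminal speed from a drag coefficient by balancing drag against weight; only Cd max = 1.18 is taken from the abstract, while the mass, air density, reference area, and the other Cd values are assumed placeholder figures.

```python
import math

# Illustrative sketch of the terminal-speed estimate described above:
# v_t = sqrt(2*m*g / (rho * A * Cd)). Mass, air density and reference
# area below are assumed placeholder values, not figures from the paper.
RHO_AIR = 1.225   # kg/m^3, sea-level air density (assumption)
G = 9.81          # m/s^2
MASS = 25.0       # kg, assumed vehicle + payload mass
AREA = 1.0        # m^2, assumed reference area of the circular wing

def terminal_speed(cd, mass=MASS, area=AREA, rho=RHO_AIR):
    """Terminal speed at which drag balances weight for a given Cd."""
    return math.sqrt(2.0 * mass * G / (rho * area * cd))

# Drag coefficients vs. angle of attack would come from the CFD/response
# surface fit; only Cd_max = 1.18 is quoted in the abstract.
for aoa, cd in [(2, 0.10), (40, 0.65), (80, 1.18)]:   # first two Cd values assumed
    print(f"AoA {aoa:2d} deg: Cd = {cd:.2f}, v_t = {terminal_speed(cd):.1f} m/s")
```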

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 153
1126 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration

Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef

Abstract:

Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications, and is discussed in several published papers as a starting block for hand recognition systems. The automated finger enumeration technique should not only be accurate, but must also respond quickly to a moving-picture input. The high frame rate of video in motion games or distance control applications constrains the program’s overall speed, since image processing software such as Matlab needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final cleaned, auto-cropped image of the hand to be used by the two techniques. The first technique uses the known morphological tool of skeletonization and counts the skeleton’s endpoints as fingers. The second technique uses a radial distance method, which builds a one-dimensional representation of the hand in order to enumerate the fingers. For both methods, the different steps of the algorithms are explained. Then, a comparative study analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and more responsive to a real-time video input than the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations on top of a plain color background. Finally, the limitations surrounding the enumeration techniques are presented.
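
A minimal sketch of the radial distance idea follows, assuming a pre-segmented binary hand mask as input; the OpenCV calls, the 0.7 fingertip threshold, and the peak-counting rule are illustrative choices, not the authors' Matlab implementation.

```python
import cv2
import numpy as np

# Minimal sketch of the radial-distance idea: distances from the palm
# centroid to the hand contour form a 1-D signal whose prominent peaks
# correspond to extended fingertips.
def count_fingers_radial(binary_hand):
    """binary_hand: uint8 mask of the segmented hand (255 = hand)."""
    contours, _ = cv2.findContours(binary_hand, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # palm centroid

    pts = hand.reshape(-1, 2).astype(float)
    radial = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    radial /= radial.max()                               # normalise to [0, 1]

    # Count excursions above an assumed fingertip threshold of 0.7.
    above = radial > 0.7
    rises = np.count_nonzero(above[1:] & ~above[:-1])
    return int(rises)
```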

Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab

Procedia PDF Downloads 363
1125 Pill-Box Dispenser as a Strategy for Therapeutic Management: A Qualitative Evaluation

Authors: Bruno R. Mendes, Francisco J. Caldeira, Rita S. Luís

Abstract:

Population ageing is directly correlated with an increase in medicine consumption. Beyond the latter and the polymedicated profile of the elderly, there is a need for pharmacotherapeutic monitoring due to cognitive and physical impairment. In this sense, the tracking, organization and administration of medicines become a daily challenge, and the pill-box dispenser system a solution. The pill-box dispenser (system) consists of a small compartmentalized container for unit-dose organization, that is, a container able to correlate the patient’s prescribed dose regimen with the time schedule of intake. In many European countries, this system is part of the pharmacist’s role in clinical pharmacy. Despite this simple solution, therapy compliance is only possible if the patient adheres to the system, so it is important to establish a qualitative and quantitative analysis of the patient’s perception of the benefits and risks of the pill-box dispenser, as well as to identify the ideal system. The analysis was conducted through an observational study, based on the application of a standardized questionnaire structured with a 5-level Likert numerical scale and previously validated on the population. The study was performed during a limited period of time on a randomized sample of 188 participants. The questionnaire consisted of 22 questions: 6 background measures and 16 specific measures. The standards for the final comparative analysis were obtained from the state of the art on the subject. The Likert-scale analysis yielded a degree of agreement and discordance between measures (Sample vs. Standard) of 56.25% and 43.75%, respectively. It was concluded that the pill-box dispenser has greater acceptance among a younger population, which was not the initial target of the system; however, this suggests high adherence in the future. Additionally, it was noted that the cost associated with this service is not a limiting factor for its use. The pill-box dispenser system, as currently implemented, demonstrates an important weakness regarding the quality and effectiveness of the medicines, which is not understood by the patient, revealing a significant lack of literacy where medicines are concerned. The characteristics of an ideal system remain unchanged, which means that the size, appearance and availability of information in the pill-box continue to be indispensable elements for compliance with the system. The pill-box dispenser remains unsuitable regarding container size and the type of treatment to which it applies. Despite that, it might become a standard for clinical pharmacy, allowing a differentiation of the pharmacist’s role, as well as a wider range of applications to other age groups and treatments.

Keywords: clinical pharmacy, medicines, patient safety, pill-box dispenser

Procedia PDF Downloads 176
1124 Optimal Location of the I/O Point in the Parking System

Authors: Jing Zhang, Jie Chen

Abstract:

In this paper, we deal with the optimal I/O point location in an automated parking system. In this system, the S/R machine (storage and retrieval machine) travels independently in the vertical and horizontal directions. Based on the characteristics of the parking system and the basic principles of AS/RS systems (Automated Storage and Retrieval Systems), we obtain a continuous model in units of time. For the single-command cycle under the randomized storage policy, we calculate the probability density function of the system travel time and thus develop the travel time model. We confirm that the travel time model performs well by comparing it with the discrete case. Finally, we establish the optimization model by minimizing the expected travel time, and it is shown that the optimal location of the I/O point is at the middle of the upper left-hand corner.
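
The sketch below illustrates the kind of continuous-time calculation involved, numerically estimating the expected single-command travel time for candidate I/O positions on a normalized rack; the rack shape factor, grid resolution, and restriction of candidates to the left edge are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Minimal numerical sketch of the expected single-command travel time
# E[max(|X - x0|, |Y - y0|)] under a randomized storage policy, where the
# S/R machine moves simultaneously (independently) in the horizontal and
# vertical directions. The rack is normalized to 1.0 x B time units;
# B = 0.6 is an assumed shape factor, not a value from the paper.
B = 0.6
xs = np.linspace(0.0, 1.0, 201)        # horizontal coordinate (time units)
ys = np.linspace(0.0, B, 201)          # vertical coordinate (time units)
X, Y = np.meshgrid(xs, ys)

def expected_travel_time(x0, y0):
    """Mean dual-direction (Chebyshev) travel time from I/O point (x0, y0)."""
    return np.maximum(np.abs(X - x0), np.abs(Y - y0)).mean()

# Search candidate I/O positions along the left edge of the rack.
candidates = [(0.0, y0) for y0 in np.linspace(0.0, B, 61)]
best = min(candidates, key=lambda p: expected_travel_time(*p))
print("best I/O point on left edge:", best,
      "E[T] =", round(expected_travel_time(*best), 4))
```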

Keywords: parking system, optimal location, response time, S/R machine

Procedia PDF Downloads 391
1123 Electro-Thermo-Mechanical Behaviour of Functionally Graded Material Usage in Lead Acid Storage Batteries and the Benefits

Authors: Sandeep Das

Abstract:

The terminal post is one of the most important features of a battery. The design and manufacturing of the post are critical, especially when threaded inserts (bolt-on type) are used, since all the collected energy is delivered from the lead part to the threaded insert (Cu or Cu alloy). Any imperfection at the interface may cause voltage drop, high resistance, high heat generation, etc. This may be because of the sudden change of material properties from lead to Cu alloys. To avoid this problem, a scheme of material gradation is proposed for achieving a continuous variation of material properties in the post used in a commercially available lead-acid battery. The functionally graded (FG) material for the post is considered to be composed of different layers of homogeneous material. The volume fraction of the materials corresponding to each layer is calculated by considering its variation along the direction of current flow (z) according to a power law. Accordingly, the effective properties of the homogeneous layers are estimated, and the post composed of this FG material is modeled using the commercially available ANSYS software. The SOLID186 layered structural solid element has been used for discretization of the FG post model. A thermal-electric analysis is performed on the layered FG model. The developed model has been validated by comparing its results with those of the existing post model and with experimental analysis.
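
The following sketch illustrates the power-law gradation idea, assigning each homogeneous layer a volume fraction and rule-of-mixtures properties along the current-flow direction; the layer count, exponent, and material values are illustrative assumptions, not the values used in the study.

```python
# Sketch of the power-law gradation used to define each homogeneous layer
# of the FG post. Material values are illustrative approximations for lead
# and a copper alloy, and the exponent n is an assumed parameter.
N_LAYERS = 10
n = 2.0                                  # power-law exponent (assumption)
props_lead = {"E": 16e9,  "k": 35.0,  "sigma": 4.8e6}   # Pa, W/mK, S/m (approx.)
props_cu   = {"E": 110e9, "k": 390.0, "sigma": 58e6}

def volume_fraction(z, h, n):
    """Volume fraction of Cu at height z along the current-flow direction."""
    return (z / h) ** n

def effective_props(vf):
    """Rule-of-mixtures estimate for one homogeneous layer."""
    return {key: vf * props_cu[key] + (1 - vf) * props_lead[key]
            for key in props_lead}

h = 1.0
for i in range(N_LAYERS):
    z_mid = (i + 0.5) * h / N_LAYERS     # mid-height of layer i
    vf = volume_fraction(z_mid, h, n)
    layer = effective_props(vf)
    print(f"layer {i}: Vf(Cu) = {vf:.3f}, E = {layer['E']/1e9:.1f} GPa")
```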

Keywords: ANSYS, functionally graded material, lead-acid battery, terminal post

Procedia PDF Downloads 114
1122 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study

Authors: D. M. Samartsev, A. G. Copping

Abstract:

As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes the use of a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as locations and projects. The methodology is informed by the design of this framework – taking on aspects of a systematic review but compressed in time to allow an initial set of data to verify the validity of the framework. Such a framework of quantification enables various practical uses, such as predicting the future of the architectural industry with regard to which tasks will be automated, as well as making more informed decisions on the subject of automation on multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches, it is possible to create a data structure that describes how much of the architectural design process is automated. The data gathered in the paper serves the dual purposes of validating the framework and giving insights into the current state of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years and the majority of tasks in the design process reaching a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.
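
A minimal sketch of such a recursive task structure is given below; the task names, milestone scores, and the simple averaging used to aggregate them are hypothetical stand-ins for the framework's actual breakdown and milestone definitions.

```python
# Minimal sketch of the recursive task/milestone data structure implied by
# the framework: the design process is a task tree, each leaf carries an
# automation milestone score in [0, 1], and parents aggregate their children.
# Task names and scores below are hypothetical, not the study's data.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    automation: float = 0.0          # milestone score for a leaf task
    children: list = field(default_factory=list)

    def score(self):
        """Automation level of this task, recursively averaged over children."""
        if not self.children:
            return self.automation
        return sum(child.score() for child in self.children) / len(self.children)

design = Task("architectural design", children=[
    Task("concept design", children=[
        Task("massing studies", automation=0.6),
        Task("site analysis", automation=0.4),
    ]),
    Task("technical design", children=[
        Task("structural sizing", automation=0.5),
        Task("drawing production", automation=0.7),
    ]),
])
print(f"overall automation: {design.score():.0%}")
```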

Keywords: analysis, architecture, automation, design process, technology

Procedia PDF Downloads 81
1121 Design of a Portable Shielding System for a Newly Installed NaI(Tl) Detector

Authors: Mayesha Tahsin, A.S. Mollah

Abstract:

Recently, a 1.5x1.5 inch NaI(Tl) detector based gamma-ray spectroscopy system has been installed in the laboratory of the Nuclear Science and Engineering Department of the Military Institute of Science and Technology for radioactivity detection purposes. The newly installed NaI(Tl) detector has a circular lead shield of 22 mm width. An important consideration in any gamma-ray spectroscopy system is the minimization of natural background radiation not originating from the radioactive sample being measured. Natural background gamma-ray radiation comes from naturally occurring or man-made radionuclides in the environment or from cosmic sources. Moreover, the main problem with this system is that it is not suitable for measurements of radioactivity with a large sample container such as a Petri dish or Marinelli beaker geometry. When any laboratory installs a new detector and/or a new shield, it must first carry out quality and performance tests for the detector and shield. This paper describes a new portable lead shielding system that can reduce the background radiation. The intensity of gamma radiation after passing through the shielding will be calculated using the shielding equation I = I₀e^(−µx), where I₀ is the initial intensity of the gamma source, I is the intensity after passing through the shield, µ is the linear attenuation coefficient of the shielding material, and x is the thickness of the shielding material. The height and width of the shielding will be selected in order to accommodate the large sample container. The detector will be surrounded by a 4π-geometry low-activity lead shield. An additional 1.5 mm thick shield of tin and a 1 mm thick shield of copper covering the inner part of the lead shielding will be added in order to remove the characteristic X-rays from the lead shield.
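
The sketch below applies the attenuation relation quoted above to the lead wall; the attenuation coefficient for 662 keV gammas and the target reduction factor are assumed illustrative values, not the design figures of the proposed shield.

```python
import math

# Sketch of the attenuation relation I = I0 * exp(-mu * x) quoted above,
# used here to size the lead wall. mu = 1.2 cm^-1 is an approximate linear
# attenuation coefficient of lead for 662 keV gammas (assumed value).
MU_PB = 1.2   # cm^-1

def transmitted_fraction(thickness_cm, mu=MU_PB):
    """Fraction of the incident gamma intensity passing through the shield."""
    return math.exp(-mu * thickness_cm)

def thickness_for_reduction(factor, mu=MU_PB):
    """Lead thickness (cm) needed to reduce intensity by the given factor."""
    return math.log(factor) / mu

for x in (2.2, 5.0, 10.0):             # 2.2 cm is the existing 22 mm shield
    print(f"x = {x:4.1f} cm -> I/I0 = {transmitted_fraction(x):.4f}")
print("thickness for a 1000x reduction:",
      round(thickness_for_reduction(1000.0), 1), "cm")
```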

Keywords: shield, NaI (Tl) detector, gamma radiation, intensity, linear attenuation coefficient

Procedia PDF Downloads 134
1120 Natural Language Processing; the Future of Clinical Record Management

Authors: Khaled M. Alhawiti

Abstract:

This paper investigates the future of medicine and the use of natural language processing. The importance of having correct clinical information available online is remarkable; improving patient care at affordable costs could be achieved by using automated applications that exploit this online clinical information. The major challenge in retrieving such vital information is having it appropriately coded. The majority of online patient reports are not coded and are not readily accessible, as they are recorded as natural language text. Natural language processing provides a feasible solution by retrieving and organizing clinical information available as text and transforming it into clinical data that is ready for use. NLP systems are rather complex to construct, as they entail considerable knowledge; however, significant development has been made. Newly formed NLP systems have been tested and have established promising performance, and they are considered practical for clinical applications.

Keywords: clinical information, information retrieval, natural language processing, automated applications

Procedia PDF Downloads 382
1119 Fast Generation of High-Performance Driveshafts: A Digital Approach to Automated Linked Topology and Design Optimization

Authors: Willi Zschiebsch, Alrik Dargel, Sebastian Spitzer, Philipp Johst, Robert Böhm, Niels Modler

Abstract:

In this article, we investigate an approach that digitally links individual development process steps by using the drive shaft of an aircraft engine as a representative example of a fiber polymer composite. Such high-performance, lightweight composite structures have many adjustable parameters that influence the mechanical properties. Only a combination of optimal parameter values can lead to energy efficient lightweight structures. The development tools required for the Engineering Design Process (EDP) are often isolated solutions, and their compatibility with each other is limited. A digital framework is presented in this study, which allows individual specialised tools to be linked via the generated data in such a way that automated optimization across programs becomes possible. This is demonstrated using the example of linking geometry generation with numerical structural analysis. The proposed digital framework for automated design optimization demonstrates the feasibility of developing a complete digital approach to design optimization. The methodology shows promising potential for achieving optimal solutions in terms of mass, material utilization, eigenfrequency, and deformation under lateral load with less development effort. The development of such a framework is an important step towards promoting a more efficient design approach that can lead to stable and balanced results.
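
A highly simplified sketch of such a linked loop is shown below: a parametric geometry step feeds an analysis step, whose outputs feed a multi-objective search. All functions, parameters, and the random-search stand-in for NSGA-II/III or TPE are placeholders, not the actual tools coupled in the study.

```python
# Highly simplified sketch of the "digitally linked" loop the framework
# automates: a geometry generator feeds a structural analysis, whose results
# feed a multi-objective search. The functions and parameter names here are
# placeholders, not the tools actually coupled in the study.
import random

def generate_geometry(ply_angle_deg, wall_thickness_mm):
    """Stand-in for the parametric CAD step."""
    return {"ply_angle": ply_angle_deg, "thickness": wall_thickness_mm}

def structural_analysis(geometry):
    """Stand-in for the FE solver: returns mass and deformation surrogates."""
    mass = 2.0 * geometry["thickness"]                       # kg, toy model
    deformation = 50.0 / (geometry["thickness"] *
                          (1.0 + abs(geometry["ply_angle"]) / 90.0))
    return {"mass": mass, "deformation": deformation}

def dominates(a, b):
    """Pareto dominance on (mass, deformation), both minimized."""
    return all(a[k] <= b[k] for k in a) and any(a[k] < b[k] for k in a)

# Random search stands in for the NSGA-II/III or TPE mentioned in the keywords.
designs = []
for _ in range(200):
    geom = generate_geometry(random.uniform(0, 90), random.uniform(1.0, 6.0))
    designs.append((geom, structural_analysis(geom)))

pareto = [d for d in designs
          if not any(dominates(o[1], d[1]) for o in designs if o is not d)]
print(f"{len(pareto)} non-dominated designs out of {len(designs)}")
```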

Keywords: digital linked process, composite, CFRP, multi-objective, EDP, NSGA-2, NSGA-3, TPE

Procedia PDF Downloads 53
1118 Capacity Estimation of Hybrid Automated Repeat Request Protocol for Low Earth Orbit Mega-Constellations

Authors: Arif Armagan Gozutok, Alper Kule, Burak Tos, Selman Demirel

Abstract:

A wireless communication chain requires effective ways to keep throughput efficiency high while it suffers from location-dependent, time-varying burst errors. Several techniques have been developed to ensure that the receiver recovers the transmitted information without errors. The most fundamental approaches are error checking and correction, along with re-transmission of non-acknowledged packets. In this paper, stop & wait (SAW) and chase combining (CC) hybrid automated repeat request (HARQ) protocols are compared and analyzed in terms of throughput and average delay for the low earth orbit (LEO) mega-constellation use case. Several assumptions and technological implementations are considered, as well as the usage of low-density parity check (LDPC) codes together with several constellation orbit configurations.
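
As a rough illustration of the throughput analysis, the sketch below runs a Monte Carlo simulation of stop-and-wait HARQ over a long-delay link; the packet size, data rate, round-trip time, and error rate are assumed values, and the combining gain of the CC scheme is not modeled.

```python
import random

# Minimal Monte Carlo sketch of stop-and-wait HARQ throughput over a long
# propagation delay link. The packet error rate, data rate and round-trip
# time below are assumed illustrative values, not the paper's link budget.
PACKET_BITS = 8_000
DATA_RATE_BPS = 10e6
RTT_S = 0.02                       # ~20 ms LEO round-trip time (assumption)
PER = 0.1                          # packet error rate per transmission
MAX_RETX = 4

def simulate_saw(n_packets=10_000, seed=1):
    random.seed(seed)
    t_tx = PACKET_BITS / DATA_RATE_BPS
    total_time, delivered, delays = 0.0, 0, []
    for _ in range(n_packets):
        start = total_time
        for attempt in range(1, MAX_RETX + 1):
            total_time += t_tx + RTT_S          # send, then wait for ACK/NACK
            if random.random() > PER:           # decoded correctly
                delivered += 1
                delays.append(total_time - start)
                break
    throughput = delivered * PACKET_BITS / total_time
    return throughput, sum(delays) / len(delays)

thr, delay = simulate_saw()
print(f"SAW HARQ: throughput = {thr/1e6:.2f} Mbit/s, mean delay = {delay*1e3:.1f} ms")
```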

Keywords: HARQ, LEO, satellite constellation, throughput

Procedia PDF Downloads 123
1117 Fault Tolerant (n,k)-star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems

Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj K. Biswas, Frank Ferrese

Abstract:

This paper investigates the joint effect of the interconnected (n,k)-star network topology and Multi-Agent automated control on the restoration and reconfiguration of power systems. With the increasing development of Multi-Agent control technologies applied to power system reconfiguration in the presence of faulty components or nodes, fault tolerance is becoming an important challenge in the design of distributed power system topologies. Since the reconfiguration of a power system is performed through agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. We discuss the recently proposed (n,k)-star topology and examine its properties and advantages as compared to traditional multi-bus power topologies. We design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that the (n,k)-star topology model has measurable advantages compared to standard bus power systems, while exhibiting fault tolerance properties in power restoration as well as efficiency when applied to power system route discovery.
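
For reference, the sketch below builds the (n,k)-star graph from its standard definition and checks whether the surviving agents remain connected after node faults; the small S(4,2) instance and the chosen faulty nodes are illustrative, not the test cases of the paper.

```python
from itertools import permutations

# Sketch of the (n, k)-star interconnection graph S(n, k): vertices are
# k-permutations of {1..n}; a vertex is adjacent to the vertices obtained by
# swapping its first symbol with the i-th symbol (i = 2..k) and by replacing
# its first symbol with any unused symbol. Every node has degree n - 1.
def nk_star(n, k):
    nodes = list(permutations(range(1, n + 1), k))
    adj = {v: set() for v in nodes}
    for v in nodes:
        for i in range(1, k):                          # swap positions 1 and i+1
            u = list(v); u[0], u[i] = u[i], u[0]
            adj[v].add(tuple(u))
        for s in range(1, n + 1):                      # replace the first symbol
            if s not in v:
                adj[v].add((s,) + v[1:])
    return adj

def connected_after_faults(adj, faulty):
    """Check whether the surviving agents can still reach each other."""
    alive = [v for v in adj if v not in faulty]
    seen, stack = {alive[0]}, [alive[0]]
    while stack:
        for u in adj[stack.pop()]:
            if u not in faulty and u not in seen:
                seen.add(u); stack.append(u)
    return len(seen) == len(alive)

adj = nk_star(4, 2)                                    # 12 nodes, degree 3
print(len(adj), "nodes; connected with 2 faults:",
      connected_after_faults(adj, {(1, 2), (2, 1)}))
```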

Keywords: (n, k)-star topology, fault tolerance, conditional diagnosability, multi-agent system, automated power system

Procedia PDF Downloads 493
1116 Fault Tolerant (n, k)-Star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems

Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj Biswas, Frank Ferrese

Abstract:

This paper investigates the joint effect of the interconnected (n,k)-star network topology and Multi-Agent automated control on the restoration and reconfiguration of power systems. With the increasing development of Multi-Agent control technologies applied to power system reconfiguration in the presence of faulty components or nodes, fault tolerance is becoming an important challenge in the design of distributed power system topologies. Since the reconfiguration of a power system is performed through agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. We discuss the recently proposed (n,k)-star topology and examine its properties and advantages as compared to traditional multi-bus power topologies. We design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that the (n,k)-star topology model has measurable advantages compared to standard bus power systems, while exhibiting fault tolerance properties in power restoration as well as efficiency when applied to power system route discovery.

Keywords: (n, k)-star topology, fault tolerance, conditional diagnosability, multi-agent system, automated power system

Procedia PDF Downloads 440
1115 Assessment of Knowledge and Attitude towards End of Life Care among Nurses Working in Tertiary Hospital

Authors: Emni Omar Daw Hussin, Pathmawathi Subramanian, Wong Li Ping

Abstract:

Background: To provide quality care at the end of life, nurses should possess the knowledge and skills to provide effective end-of-life care, as well as develop the attitudes and interpersonal competence to provide compassionate care. Aim: This study aimed to assess nurses’ knowledge of and attitudes towards end-of-life care and caring for terminally ill patients, and to examine relationships among demographic variables and nurses’ knowledge and attitudes toward end-of-life care and caring for terminally ill patients. Method: A cross-sectional study was conducted at one tertiary hospital located in Kuala Lumpur, Malaysia. A self-administered questionnaire was used to collect data from 553 nurses from all departments except the emergency department, operating theater and outpatient clinic. Two tools were used in this study: the Frommelt Attitude Toward Care of the Dying (FATCOD) Scale to assess the nurses’ attitudes and the End of Life Knowledge Assessment to assess the nurses’ knowledge. Result: The majority of participants had a less positive attitude (54.8%) and poorer knowledge (54.4%) regarding end-of-life care and caring for terminally ill patients. No significant relationship was found between nurses’ ethnicity or religion and the total scores of the FATCOD scale and the End of Life Knowledge Assessment. On the other hand, there were significant relationships among nurses’ age, working experience, level of education, attendance of post-basic courses and the total scores of both the FATCOD scale and the End of Life Knowledge Assessment. Conclusion: A lack of education, experience and post-basic courses about end-of-life care and palliative care may contribute to the negative attitudes and poor knowledge regarding end-of-life care. Providing sufficient courses about end-of-life care could enhance nurses’ knowledge of end-of-life care, and providing a reflective narrative environment in which nurses can express their personal feelings about death and dying could be a potentially effective approach. Implication for Practice: This study elaborates the need for further research to develop effective educational programs to enhance nurses’ knowledge, to promote positive attitudes towards death and dying, and to enhance communication skills and coping strategies.

Keywords: knowledge, attitude, nurse, end of life care

Procedia PDF Downloads 425
1114 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within the contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of 3 Python scripts that could all be easily accessed through a Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results, but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
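
A minimal sketch of the three stages described in the Methods is given below (grayscale conversion and binary thresholding, contour detection, and per-contour mean fluorescence); the threshold strategy and minimum-area filter are assumed choices, not the exact parameters of the published pipeline.

```python
import cv2
import numpy as np

# Minimal sketch of the three pipeline stages described above (grayscale
# conversion + binary thresholding, contour detection, mean fluorescence per
# contour). Threshold and area limits are assumed values for illustration.
def detect_cells(frame_bgr, min_area=30):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]

def fluorescence_traces(frames_gray, contours):
    """Mean fluorescence inside each contour for every frame of the movie.

    frames_gray: list of 2-D grayscale frames from the recording.
    """
    h, w = frames_gray[0].shape
    masks = []
    for c in contours:
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)
        masks.append(mask > 0)
    return np.array([[frame[m].mean() for m in masks] for frame in frames_gray])

# traces[t, i] then holds the calcium signal of neuron i at frame t, from
# which transient activity events can be detected (e.g., by dF/F thresholding).
```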

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 65
1113 A Small-Molecular Inhibitor of Influenza Virus via Disrupting the PA and PB1 Interaction of the Viral Polymerase

Authors: Shuofeng Yuan, Bojian Zheng

Abstract:

Assembly of the heterotrimeric polymerase complex of influenza virus from the individual subunits PB1, PA, and PB2 is a prerequisite for viral replication, and the interaction between the N-terminus of PB1 (PB1N) and the C-terminus of PA (PAC) may be a desirable target for antiviral development. In this study, we first compared the feasibility of high-throughput screening by enzyme-linked immunosorbent assay (ELISA) and fluorescence polarization (FP) assay. Of the two, ELISA was demonstrated to have a broader dynamic range, so it was used for screening inhibitors that block the PA-PB1 interaction. Several binding inhibitors of PAC-PB1N were identified and subsequently tested for antiviral efficacy. Notably, 3-(2-chlorophenyl)-6-ethyl-7-methyl[1,2,4]triazolo[4,3-a]pyrimidin-5-ol, designated ANA-1, was found to be a strong inhibitor of the PAC-PB1N interaction and to act as a potent antiviral agent against infections by multiple subtypes of influenza A virus, including the H1N1, H3N2, H5N1, H7N7, H7N9 and H9N2 subtypes, in cell cultures. Intranasal administration of ANA-1 protected mice from lethal challenge and reduced lung viral loads in H1N1 virus-infected BALB/c mice. Docking analyses predicted that ANA-1 binds to an allosteric site of PAC, which would cause conformational changes, thereby disrupting the PAC-PB1N interaction. Overall, our study has identified a novel compound with the potential to be developed as an anti-influenza drug.

Keywords: influenza, antiviral, viral polymerase, compounds

Procedia PDF Downloads 330
1112 Genome-Wide Isoform Specific KDM5A/JARID1A/RBP2 Location Analysis Reveals Contribution of Chromatin-Interacting PHD Domain in Protein Recruitment to Binding Sites

Authors: Abul B. M. M. K. Islam, Nuria Lopez-Bigas, Elizaveta V. Benevolenskaya

Abstract:

RBP2 has been shown to be important for cell differentiation control through an epigenetic mechanism. The main aim of the present study is a genome-wide location analysis, by ChIP-seq, of human RBP2 isoforms that differ in a histone-binding domain. It is conceivable that the larger isoform (LI) of RBP2, which contains a specific H3K4me3-interacting domain, differs from the smaller isoform (SI) in genomic location, which may account for the observed diversity in RBP2 function. To distinguish the two RBP2 isoforms, we used the fact that the SI lacks the C-terminal PHD domain and hence used antibodies detecting both RBP2 isoforms (AI) through a common central domain, and antibodies detecting only the LI, but not the SI, through the C-terminal PHD domain. Overall, our analysis suggests that RBP2 occupies about 77 nucleotides, binds GC-rich motifs of active genes, does not bind to centromere, telomere, or enhancer regions, and has binding sites that are conserved compared to random. A striking difference between the only-SI and only-LI peaks is that a large number of only-SI peaks are located in CpG islands and close to the TSS compared to only-LI peaks. Enrichment analysis of the related genes indicates that several oncogenic pathways and metabolic pathways/processes are significantly enriched among only-SI/AI targets, but not among targets of LI/only-LI peaks.

Keywords: bioinformatics, cancer, ChIP-seq, KDM5A

Procedia PDF Downloads 287
1111 A Framework for an Automated Decision Support System for Selecting Safety-Conscious Contractors

Authors: Rawan A. Abdelrazeq, Ahmed M. Khalafallah, Nabil A. Kartam

Abstract:

Selection of competent contractors for construction projects is usually accomplished through competitive bidding or negotiated contracting, in which the contract bid price is the basic criterion for selection. The evaluation of a contractor’s safety performance is still not a typical criterion in the selection process, despite the existence of various safety prequalification procedures. There is a critical need for practical and automated systems that enable owners and decision makers to evaluate contractor safety performance, among other important contractor selection criteria. These systems should ultimately favor safety-conscious contractors by virtue of their good past safety records and current safety programs. This paper presents an exploratory sequential mixed-methods approach to develop a framework for an automated decision support system that evaluates contractor safety performance based on a multitude of indicators and metrics identified through a comprehensive review of construction safety research and a survey distributed to domain experts. The framework is developed in three phases: (1) determining the indicators that depict contractor current and past safety performance; (2) soliciting input from construction safety experts regarding the identified indicators, their metrics, and relative significance; and (3) designing a decision support system using relational database models to integrate the identified indicators and metrics into a system that assesses and rates the safety performance of contractors. The proposed automated system is expected to hold several advantages, including: (1) reducing the likelihood of selecting contractors with poor safety records; (2) enhancing the odds of completing the project safely; and (3) encouraging contractors to exert more effort to improve their safety performance and practices in order to increase their bid-winning opportunities, which can lead to significant safety improvements in the construction industry. This should prove useful to decision makers and researchers alike, and should help improve the safety record of the construction industry.
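
As a toy illustration of phase (3), the sketch below turns a set of indicator scores and weights into a contractor safety rating; the indicator names, weights, and scores are hypothetical placeholders, not the ones elicited from the survey.

```python
# Minimal sketch of how the relational indicator/metric tables could feed a
# weighted contractor safety rating. Indicator names, weights and scores are
# hypothetical placeholders, not the ones elicited from the survey.
INDICATOR_WEIGHTS = {
    "EMR (experience modification rate)": 0.30,
    "OSHA recordable incident rate":      0.30,
    "safety training program":            0.25,
    "dedicated safety staff":             0.15,
}

def safety_rating(scores):
    """Weighted sum of normalized indicator scores (each in [0, 1])."""
    return sum(INDICATOR_WEIGHTS[name] * scores[name] for name in INDICATOR_WEIGHTS)

contractors = {
    "Contractor A": {"EMR (experience modification rate)": 0.9,
                     "OSHA recordable incident rate": 0.8,
                     "safety training program": 0.7,
                     "dedicated safety staff": 1.0},
    "Contractor B": {"EMR (experience modification rate)": 0.6,
                     "OSHA recordable incident rate": 0.5,
                     "safety training program": 0.9,
                     "dedicated safety staff": 0.5},
}
for name, scores in sorted(contractors.items(),
                           key=lambda kv: safety_rating(kv[1]), reverse=True):
    print(f"{name}: safety rating = {safety_rating(scores):.2f}")
```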

Keywords: construction safety, contractor selection, decision support system, relational database

Procedia PDF Downloads 260
1110 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding deploys a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
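
A toy sketch of the idea follows: a declarative service description is turned into a placement that embeds communicating instance pairs on the same machine and aggregates the rest by scaling behavior. The service names, fields, and greedy placement rule are illustrative, not the paper's formal method or the i2kit tool.

```python
# Toy sketch of a declarative architecture and a greedy placement pass that
# embeds communicating services on the same machine and aggregates services
# with similar scaling behaviour. Illustrative only; not the formal method.
services = {
    "A": {"instances": 2, "talks_to": ["B"], "scaling": "cpu"},
    "B": {"instances": 2, "talks_to": [],    "scaling": "cpu"},
    "C": {"instances": 1, "talks_to": [],    "scaling": "memory"},
}

def place(services):
    machines = []                      # each machine holds a list of containers
    # Embedding: co-locate each instance pair of services that communicate.
    for name, spec in services.items():
        for peer in spec["talks_to"]:
            pairs = min(spec["instances"], services[peer]["instances"])
            for i in range(pairs):
                machines.append([f"{name}-{i}", f"{peer}-{i}"])
    placed = {c for m in machines for c in m}
    # Aggregation: group the remaining instances by scaling behaviour.
    groups = {}
    for name, spec in services.items():
        for i in range(spec["instances"]):
            cid = f"{name}-{i}"
            if cid not in placed:
                groups.setdefault(spec["scaling"], []).append(cid)
    machines.extend(group for group in groups.values() if group)
    return machines

for idx, machine in enumerate(place(services)):
    print(f"machine m{idx + 1}: {machine}")
```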

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 179
1109 Automated User Story Driven Approach for Web-Based Functional Testing

Authors: Mahawish Masud, Muhammad Iqbal, M. U. Khan, Farooque Azam

Abstract:

Manual writing of test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but are also challenging to maintain. Test cases can be drawn from functional requirements that are expressed in natural language. However, manual test case generation is inefficient and subject to errors. In this paper, we present a systematic procedure that can automatically derive test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template. We also present a detailed methodology for writing our test-ready user stories. Our tool “Test-o-Matic” automatically generates the test cases by processing the restricted user stories. The generated test cases are executed using the open-source Selenium IDE. We evaluate our approach on a case study, which is an open-source web-based application. The effectiveness of our approach is evaluated by seeding faults in the case study application using known mutation operators. Results show that test case generation from restricted user stories is a viable approach for automated testing of web applications.
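
As a minimal illustration of the idea, the sketch below parses one restricted user story into a test-case skeleton; the story template, field names, and generated structure are hypothetical and do not reproduce the paper's user-story grammar or the Test-o-Matic implementation.

```python
import re

# Illustrative sketch of turning one restricted user story into a test-case
# skeleton. The template and field names are hypothetical, not the paper's
# exact user-story grammar or the Test-o-Matic implementation.
STORY_PATTERN = re.compile(
    r"As an? (?P<role>.+?), I want to (?P<action>.+?) "
    r"so that (?P<benefit>.+?)\.\s*"
    r"Given (?P<given>.+?), when (?P<when>.+?), then (?P<then>.+?)\.",
    re.IGNORECASE | re.DOTALL,
)

def story_to_test_case(story_text):
    match = STORY_PATTERN.search(story_text)
    if match is None:
        raise ValueError("story does not follow the restricted template")
    fields = match.groupdict()
    return {
        "name": f"test_{fields['action'].strip().replace(' ', '_')}",
        "precondition": fields["given"].strip(),
        "step": fields["when"].strip(),
        "expected_result": fields["then"].strip(),
    }

story = ("As a registered user, I want to log in so that I can see my dashboard. "
         "Given the login page is open, when I submit valid credentials, "
         "then the dashboard page is displayed.")
print(story_to_test_case(story))
```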

Keywords: automated testing, natural language, restricted user story modeling, software engineering, software testing, test case specification, transformation and automation, user story, web application testing

Procedia PDF Downloads 365
1108 ISMARA: Completely Automated Inference of Gene Regulatory Networks from High-Throughput Data

Authors: Piotr J. Balwierz, Mikhail Pachkov, Phil Arnold, Andreas J. Gruber, Mihaela Zavolan, Erik van Nimwegen

Abstract:

Understanding the key players and interactions in the regulatory networks that control gene expression and chromatin state across different cell types and tissues in metazoans remains one of the central challenges in systems biology. Our laboratory has pioneered a number of methods for automatically inferring core gene regulatory networks directly from high-throughput data by modeling gene expression (RNA-seq) and chromatin state (ChIP-seq) measurements in terms of genome-wide computational predictions of regulatory sites for hundreds of transcription factors and micro-RNAs. These methods have now been completely automated in an integrated webserver called ISMARA that allows researchers to analyze their own data by simply uploading RNA-seq or ChIP-seq data sets and provides results in an integrated web interface as well as in downloadable flat form. For any data set, ISMARA infers the key regulators in the system, their activities across the input samples, the genes and pathways they target, and the core interactions between the regulators. We believe that by empowering experimental researchers to apply cutting-edge computational systems biology tools to their data in a completely automated manner, ISMARA can play an important role in developing our understanding of regulatory networks across metazoans.

Keywords: gene expression analysis, high-throughput sequencing analysis, transcription factor activity, transcription regulation

Procedia PDF Downloads 44
1107 Surface Modification of TiO2 Layer with Phosphonic Acid Monolayer in Perovskite Solar Cells: Effect of Chain Length and Terminal Functional Group

Authors: Seid Yimer Abate, Ding-Chi Huang, Yu-Tai Tao

Abstract:

In this study, the charge extraction characteristics at the perovskite/TiO2 interface in a conventional perovskite solar cell are studied through interface engineering. Self-assembled monolayers of phosphonic acids with different chain lengths and terminal functional groups were used to modify the mesoporous TiO2 surface, modulating the surface property and interfacial energy barrier in order to investigate their effect on charge extraction and transport from the perovskite to the mp-TiO2 and then to the electrode. The chain length introduces a tunnelling distance, and the end group modulates the energy level alignment at the mp-TiO2/perovskite interface. The work function of these SAM-modified mp-TiO2 surfaces varied from −3.89 eV to −4.61 eV, with that of the pristine mp-TiO2 at −4.19 eV. A correlation of charge extraction and transport with the modification was attempted. The study serves as a guide to engineering ETL interfaces with simple SAMs to improve charge extraction, carrier balance and device long-term stability. In this study, a maximum PCE of ~16.09% with insignificant hysteresis was obtained, which is 17% higher than that of the standard device.

Keywords: energy level alignment, interface engineering, perovskite solar cells, phosphonic acid monolayer, tunnelling distance

Procedia PDF Downloads 107
1106 Comparison of Nucleic Acid Extraction Platforms On Tissue Samples

Authors: Siti Rafeah Md Rafei, Karen Wang Yanping, Park Mi Kyoung

Abstract:

Tissue samples are a precious supply for molecular studies or disease identification diagnosed using molecular assays, namely real-time PCR (qPCR). It is critical to establish the most favorable nucleic acid extraction method, one that gives PCR-amplifiable genomic DNA. Furthermore, automated nucleic acid extraction is an appealing alternative to labor-intensive manual methods. Operational complexity, defined as the number of steps required to obtain an extracted sample, is one of the criteria in the comparison. Here we compare One BioMed’s automated X8 platform with the commercially available manually operated kits from QIAGEN (Mini Kit) and Roche. We extracted DNA from rat fresh-frozen tissue (from different types of organs) in the matrices. After tissue pre-treatment, the sample is added to One BioMed’s X8 pre-filled cartridge and to the QIAGEN QIAmp column, respectively. We found that the results obtained after subjecting the eluates to real-time PCR using a BIORAD CFX are comparable.

Keywords: DNA extraction, frozen tissue, PCR, qPCR, rat

Procedia PDF Downloads 132
1105 Determination of the Thermally Comfortable Air Temperature with Consideration of Individual Clothing and Activity as Preparation for a New Smart Home Heating System

Authors: Alexander Peikos, Carole Binsfeld

Abstract:

The aim of this paper is to determine a thermally comfortable air temperature in an automated living room. This calculated temperature should serve as input for a user-specific and dynamic heating control in such a living space. In addition to the usual physical factors (air temperature, humidity, air velocity, and radiation temperature), individual clothing and activity should be taken into account. The calculation of such a temperature is based on different methods and indices which are usually used for the evaluation of thermal comfort. The thermal insulation of the worn clothing is determined with a Radio Frequency Identification system. The activity performed is taken into account only indirectly, through the generated heart rate. All these methods are ultimately very well suited for use in temperature regulation in an automated home, but still require further research and extensive evaluation.
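
For illustration, the sketch below uses Fanger's PMV model (ISO 7730) to search for the air temperature at which PMV is approximately zero for a given clothing insulation and activity level; this is a generic stand-in for the indices mentioned above, it assumes the mean radiant temperature equals the air temperature, and it does not reproduce the paper's RFID or heart-rate mappings.

```python
import math

# Minimal sketch based on Fanger's PMV model (ISO 7730): find the air
# temperature at which PMV is approximately 0 for a given clothing
# insulation (clo) and activity level (met). External work is neglected.
def pmv(ta, tr, vel, rh, met, clo):
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo                       # clothing insulation, m2K/W
    m = met * 58.15                         # metabolic rate, W/m2
    fcl = 1.05 + 0.645 * icl if icl > 0.078 else 1.0 + 1.29 * icl
    hcf = 12.1 * math.sqrt(vel)
    taa, tra = ta + 273.0, tr + 273.0
    # Iteratively solve the clothing surface temperature.
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1, p2 = icl * fcl, 3.96 * icl * fcl
    p3, p4 = 100.0 * p1, p1 * taa
    p5 = 308.7 - 0.028 * m + p2 * (tra / 100.0) ** 4
    xn = tcla / 100.0
    xf = xn
    for _ in range(150):
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
        if abs(xn - xf) < 1e-5:
            break
    tcl = 100.0 * xn - 273.0
    # Heat loss terms (skin diffusion, sweating, respiration, radiation, convection).
    hl1 = 3.05e-3 * (5733.0 - 6.99 * m - pa)
    hl2 = 0.42 * (m - 58.15) if m > 58.15 else 0.0
    hl3 = 1.7e-5 * m * (5867.0 - pa)
    hl4 = 0.0014 * m * (34.0 - ta)
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)
    hl6 = fcl * hc * (tcl - ta)
    ts = 0.303 * math.exp(-0.036 * m) + 0.028
    return ts * (m - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)

def comfortable_air_temperature(clo, met, vel=0.1, rh=50.0):
    """Bisection for the air temperature where PMV crosses zero."""
    lo, hi = 10.0, 35.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if pmv(mid, mid, vel, rh, met, clo) < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(comfortable_air_temperature(clo=1.0, met=1.2), 1), "degC")
```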

Keywords: smart home, thermal comfort, predicted mean vote, radio frequency identification

Procedia PDF Downloads 139
1104 Effects of Matrix Properties on Surfactant Enhanced Oil Recovery in Fractured Reservoirs

Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsæter

Abstract:

The properties of rocks affect the efficiency of surfactants. One objective of this study is to analyze the effects of rock properties (permeability, porosity, initial water saturation) on surfactant spontaneous imbibition at laboratory scale. The other objective is to evaluate existing upscaling methods and establish a modified upscaling method. A core is placed in a container that is full of surfactant solution, and it is assumed that there is no space between the bottom of the core and the container. The core is modelled as a cuboid matrix with a length of 3.5 cm, a width of 3.5 cm, and a height of 5 cm. The initial matrix, brine and oil properties are set to the properties of the Ekofisk Field. The simulation results for matrix permeability show that the oil recovery rate has a strong positive linear relationship with matrix permeability; higher oil recovery is obtained from the matrix with higher permeability. One existing upscaling method is verified by this model. The study of matrix porosity shows that the relationship between the oil recovery rate and matrix porosity is a negative power function, whereas the relationship between ultimate oil recovery and matrix porosity is a positive power function. The initial water saturation of the matrix has a negative linear relationship with ultimate oil recovery and enhanced oil recovery. However, the relationship between oil recovery and initial water saturation becomes more complicated over the imbibition time because of the transition of the dominating force from capillary force to gravity force. Modified upscaling methods are established. The work here could be used as a reference for surfactant application in fractured reservoirs, and the description of the relationships between matrix properties and the oil recovery rate and ultimate oil recovery helps to improve upscaling methods.

Keywords: initial water saturation, permeability, porosity, surfactant EOR

Procedia PDF Downloads 139
1103 Sequence Analysis and Molecular Cloning of PROTEOLYSIS 6 in Tomato

Authors: Nurulhikma Md Isa, Intan Elya Suka, Nur Farhana Roslan, Chew Bee Lynn

Abstract:

The evolutionarily conserved N-end rule pathway marks proteins for degradation by the Ubiquitin Proteasome System (UPS) based on the nature of their N-terminal residue. Proteins with a destabilizing N-terminal residue undergo a series of condition-dependent N-terminal modifications, resulting in their ubiquitination and degradation. Intensive research has previously been carried out in Arabidopsis. The group VII Ethylene Response Factor (ERF) transcription factors are the first N-end rule pathway substrates found in Arabidopsis, where they regulate oxygen sensing. ERFs also function as central hubs for the perception of gaseous signals in plants and control different aspects of plant development, including germination, stomatal aperture, hypocotyl elongation and stress responses. However, nothing is known about the role of this pathway in fruit development and ripening. The plant model system Arabidopsis cannot represent a fleshy fruit model system; therefore, tomato is the best model plant for this study. PROTEOLYSIS6 (PRT6) is an E3 ubiquitin ligase of the N-end rule pathway. Two homologs of PRT6 have been identified in the tomato genome database using the PRT6 protein sequence from the model plant Arabidopsis thaliana. A homology search against the Ensembl Plants database (tomato) showed Solyc09g010830.2 to be the best hit, with the highest score of 1143, an e-value of 0.0 and 61.3% identity, compared to the second hit Solyc10g084760.1. A further homology search was done using the NCBI BLAST database to validate the data. The result showed that the best hit was XP_010325853.1, an uncharacterized protein LOC101255129 (Solanum lycopersicum), with the highest score of 1601, an e-value of 0.0 and 48% identity. Both Solyc09g010830.2 and the uncharacterized protein LOC101255129 are located on chromosome 9. Further validation was carried out using the BLASTP program between these two sequences (Solyc09g010830.2 and uncharacterized protein LOC101255129) to investigate whether they were the same protein representing PRT6 in tomato. The results showed that both proteins have 100% identity, indicating that they are the same gene representing PRT6 in tomato. In addition, we used two different RNAi constructs, driven by the 35S and Polygalacturonase (PG) promoters, to study the function of PRT6 during tomato developmental stages and ripening.

Keywords: ERFs, PRT6, tomato, ubiquitin

Procedia PDF Downloads 224
1102 Expression of Fused Plasmodium falciparum Orotate Phosphoribosyltransferase and Orotidine 5'-Monophosphate Decarboxylase in Escherichia coli

Authors: Waranya Imprasittichai, Patsarawadee Paojinda, Sudaratana R. Krungkrai, Nirianne Marie Q. Palacpac, Toshihiro Horii, Jerapan Krungkrai

Abstract:

Fusion of the last two enzymes in the pyrimidine biosynthetic pathway in the inverted order, with a COOH-terminal orotate phosphoribosyltransferase (OPRT) and an NH2-terminal orotidine 5'-monophosphate decarboxylase (OMPDC), as OMPDC-OPRT, is described in many organisms. In this study, we constructed gene fusions of Plasmodium falciparum OMPDC-OPRT (1,836 bp) in the pTrcHisA vector and expressed them as a 6xHis-tagged bifunctional protein in three Escherichia coli strains (BL21, Rosetta, TOP10) at 18 °C, 25 °C and 37 °C. The recombinant bifunctional protein was partially purified by Ni-nitrilotriacetic acid affinity chromatography. The specific activities of the OPRT and OMPDC domains in the bifunctional enzyme expressed in E. coli TOP10 cells were approximately 3-4-fold higher than those in BL21 cells. There were no enzymatic activities when the construct was expressed in Rosetta cells. Maximal expression of the fused gene was observed at 18 °C, and the bifunctional enzyme had specific activities of the OPRT and OMPDC domains in a ratio of 1:2. These results provide greater yields and better catalytic activities of the bifunctional OMPDC-OPRT enzyme for further purification and kinetic study.

Keywords: bifunctional enzyme, orotate phosphoribosyltransferase, orotidine 5'-monophosphate decarboxylase, plasmodium falciparum

Procedia PDF Downloads 335
1101 Surface Display of Lipase on Yarrowia lipolytica Cells

Authors: Evgeniya Y. Yuzbasheva, Tigran V. Yuzbashev, Natalia I. Perkovskaya, Elizaveta B. Mostova

Abstract:

Cell-surface display of lipase is of great interest as it has many applications in the field of biotechnology owing to its unique advantages: simplified product purification and cost-effective downstream processing. One promising area of application for whole-cell biocatalysts with surface-displayed lipase is biodiesel synthesis. Biodiesel is a biodegradable, renewable, and nontoxic alternative fuel for diesel engines. Although the alkaline catalysis method has been widely used for biodiesel production, it has a number of limitations, such as rigorous feedstock specifications and complicated downstream processes, including removal of inorganic salts from the product, recovery of the salt-containing by-product glycerol, and treatment of alkaline wastewater. Enzymatic synthesis of biodiesel can overcome these drawbacks. In this study, Lip2p lipase was displayed on Yarrowia lipolytica cells via C- and N-terminal fusion variants. The active site of the lipase is located near the C-terminus; therefore, to prevent loss of activity, a glycine-serine linker was inserted between the Lip2p and C-domains. The hydrolytic activity of the displayed lipase reached 12,000–18,000 U/g of dry weight. However, leakage of the enzyme from the cell wall was observed. In the case of the C-terminal fusion variant, the leakage occurred due to proteolytic cleavage within the linker peptide. In the case of the N-terminal fusion variant, the leaking enzyme was present as three proteins, one of which corresponded to the whole hybrid protein. The calculated number of recombinant enzyme molecules displayed on the cell surface is approximately 6–9 × 10⁵ per cell, which is close to the theoretical maximum (2 × 10⁶ molecules/cell). Thus, we attribute the enzyme leakage to the limited space available on the cell surface. Nevertheless, the cell-bound lipase exhibited greater stability to short-term and long-term temperature treatment than the native enzyme. It retained 74% of its original activity after 5 min of incubation at 60°C, and 83% of its original activity after incubation at 50°C for 5 h. The cell-bound lipase also had higher stability in organic solvents and detergents. The developed whole-cell biocatalyst was used for biodiesel synthesis with catalyst recycling. Two repeated cycles of methanolysis yielded 84.1% and 71.0% methyl esters after 33 h and 45 h of reaction, respectively.

Keywords: biodiesel, cell-surface display, lipase, whole-cell biocatalyst

Procedia PDF Downloads 467
1100 Studies on Phylogeny of Helicoverpa armigera Populations from North Western Himalaya Region with Help of Cytochromeoxidase I Sequence

Authors: R. M. Srivastava, Subbanna A.R.N.S, Md Abbas Ahmad, S. P.More, Shivashankar, B. Kalyanbabu

Abstract:

The similar morphology associated with high genetic variability poses problems in phylogenetic studies of Helicoverpa armigera (Hubner). To identify the genetic variation of North Western Himalayan populations, the partial (mid-to-terminal region) cytochrome c oxidase subunit I (COX-1) gene was amplified and sequenced for three populations collected from Pantnagar, Almora, and Chinyalisaur. The alignment of these sequences with two other populations, Nagpur (representing a central India population) and Anhui, China (representing a complete COX-1 sequence), revealed unanimity in the middle region, with eleven single nucleotide polymorphisms (SNPs) in the Nagpur population. However, the consensus is lost when approaching the terminal region, which is associated with 15 each of SNPs and base-pair substitutions in the Chinyalisaur population. In the minimum evolution tree, the five populations separated into two major clades, one comprising only the Nagpur population and the other the rest. Among the North Western populations, the Chinyalisaur one is notable for forming a separate clade. The pairwise genetic distance ranges from 0.025 to 0.192, with the maximum between the H. armigera populations of Nagpur and Chinyalisaur. This genetic isolation of populations can be attributed to the key role of topographical barriers of weather and mountain ranges and temporal barriers due to cropping patterns.

Keywords: cytochrome c oxidase subunit I, northwestern Himalayan population, Helicoverpa armigera (Noctuidae: Lepidoptera), phylogenetic relationship, genetic variation

Procedia PDF Downloads 287
1099 A Combined CFD Simulation of Plateau Borders including Films and Transitional Areas of Liquid Foams

Authors: Abdolhamid Anazadehsayed, Jamal Naser

Abstract:

An integrated computational fluid dynamics model is developed for a combined simulation of Plateau borders, films, and the transitional areas between the films and the Plateau borders, to reduce the simplifications and shortcomings of available models for foam drainage at the micro-scale. Additionally, the counter-flow related to the Marangoni effect in the transitional area is investigated. The results of this combined model show the contributions of the films, the exterior Plateau borders, and the Marangoni flow to the drainage process more accurately, since the mutual influence of the foam's elements is included in this study. The flow rate in the exterior Plateau borders can be four times larger than in the interior ones. The exterior bubbles can be more prominent in the drainage process in cases where the number of exterior Plateau borders increases due to the geometry of the container. The ratio of the Marangoni counter-flow to the Plateau border flow increases drastically with an increase in the mobility of the air-liquid interface. However, the exterior bubbles follow the same trend with much less intensity since, typically, the flow is less dependent on the air-liquid interface in the exterior bubbles. Moreover, the Marangoni counter-flow in a near-wall transition area is less important than in an internal one. The influence of air-liquid interface mobility on the average velocity of interior foams is obtained with more accuracy under a more realistic boundary condition and is then compared with other numerical and analytical results. The contribution of the films to drainage is significant for mobile foams, as the velocity of the flow in the film has the same order of magnitude as the velocity in the Plateau border. Nevertheless, for foams with rigid interfaces, the films' contribution to foam drainage is insignificant, particularly for the films near the wall of the container.

Keywords: foam, plateau border, film, Marangoni, CFD, bubble

Procedia PDF Downloads 327
1098 Positive Bias and Length Bias in Deep Neural Networks for Premises Selection

Authors: Jiaqi Huang, Yuheng Wang

Abstract:

Premises selection, the task of selecting a set of axioms for proving a given conjecture, is a major bottleneck in automated theorem proving. An array of deep-learning-based methods has been established for premises selection, but perfect performance remains challenging. Our study examines the inaccuracy of deep neural networks in premises selection. By training network models using encoded conjecture and axiom pairs from the Mizar Mathematical Library, two potential biases are found: the network models classify more premises as necessary than unnecessary, referred to as the ‘positive bias’, and the network models perform better in proving conjectures that are paired with more axioms, referred to as the ‘length bias’. The ‘positive bias’ and ‘length bias’ discovered could inform the limitations of existing deep neural networks.
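
The sketch below shows one way the two reported biases could be quantified from a trained model's predictions; the prediction tuples are made-up placeholders rather than data from the Mizar experiments.

```python
# Minimal sketch of how the two reported biases could be measured from a
# trained model's outputs. The prediction data here is a made-up placeholder;
# in the study the pairs come from the Mizar Mathematical Library.
from collections import defaultdict

# (number of axioms paired with the conjecture, predicted label, true label)
results = [(3, 1, 1), (3, 1, 0), (5, 1, 1), (8, 1, 1), (8, 0, 0), (12, 1, 1)]

# Positive bias: fraction of premises classified as necessary vs. the truth.
pred_pos = sum(p for _, p, _ in results) / len(results)
true_pos = sum(t for _, _, t in results) / len(results)
print(f"predicted-necessary rate {pred_pos:.2f} vs. actual rate {true_pos:.2f}")

# Length bias: accuracy grouped by how many axioms the conjecture is paired with.
by_length = defaultdict(list)
for n_axioms, pred, true in results:
    by_length[n_axioms].append(pred == true)
for n_axioms in sorted(by_length):
    acc = sum(by_length[n_axioms]) / len(by_length[n_axioms])
    print(f"{n_axioms:2d} axioms: accuracy {acc:.2f}")
```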

Keywords: automated theorem proving, premises selection, deep learning, interpreting deep learning

Procedia PDF Downloads 163